Posts tagged “Azure”

Resizing managed VM disks in Azure


Executive summary:

  • As of 7 March 2019, Microsoft allows resizing data and OS managed disks up via PowerShell and the Azure Portal
  • Microsoft does not allow resizing managed disks down
  • Disk resizing requires VM shutdown and restart

Microsoft charges for the entire amount of allocated disk space of managed disks.

Also see the example in this post.

This is a major difference compared to unmanaged disks, where Microsoft charges only for used disk space. IT professionals now have to walk a tightrope in terms of disk capacity in Azure. On one hand, you need a minimum amount of free disk space on each disk to guard against running out of space; on the other hand, you need to keep the overall disk size as small as possible to avoid the high disk cost. Currently Microsoft charges for managed disk capacity as follows (East US, Standard LRS):

For example, if we have a VM with a 100 GB data disk of which only 50 GB is used, we’re billed for S10 (the next size up) at $5.89/month.

As data grows over time, we may need to expand this disk. We can resize a managed data disk using PowerShell as follows:

First we declare the needed variables, and authenticate to our Azure subscription:

#Requires -Version 5
#Requires -Modules AzureRM,AZSBTools

# Install-Module AZSBTools

$LoginName           = 'myname@domain.com'
$SubscriptionName    = 'my subscription name'
$Location            = 'EastUS'
$UseCase             = 'TestMD2'

$VMParameterList = @{
    Name                = "$UseCase-VM"
    ResourceGroupName   = "$UseCase-RG"
    Location            = $Location
    VirtualNetworkName  = "$UseCase-Vnet"
    SubnetName          = "$UseCase-Subnet"
    PublicIpAddressName = "$UseCase-PiP"
    OpenPorts           = @(80,3389)
    Credential          = (Get-SBCredential 'myVMAdmin') 
    Size                = 'Standard_D1_v2'       # Get-AzureRmVMSize -Location $Location
    DataDiskSizeInGb    = 128 
}

Login-AzureRmAccount -Credential (Get-SBCredential $LoginName) | Out-Null # -Environment AzureCloud 
Get-AzureRmSubscription -SubscriptionName $SubscriptionName -WA 0 | 
    Set-AzureRmContext | Out-Null 

Next we provision our test VM:

$Duration = Measure-Command { $VM = New-AzureRmVM @VMParameterList }
Write-Log 'Done in',"$($Duration.Hours):$($Duration.Minutes):$($Duration.Seconds) hh:mm:ss" Green,Cyan
Write-Log ' OS Disk (Managed): size',"$($VM.StorageProfile.OsDisk.DiskSizeGB) GB",'- Underlying storage',$VM.StorageProfile.OsDisk.ManagedDisk.StorageAccountType Green,Cyan,Green,Cyan
$VM.StorageProfile.DataDisks | foreach { 
    Write-Log ' Data Disk (Managed): Lun',$_.Lun,'- size',"$($_.DiskSizeGB) GB" Green,Cyan,Green,Cyan
} 

and we get output like:

Next we RDP to the test VM and write test data to the data disk:

$IPv4Address = (Get-AzureRmPublicIpAddress -ResourceGroupName $VM.ResourceGroupName -Name "$UseCase-PiP").IpAddress  
mstsc /v:$IPv4Address

After logging in to the VM, we partition and format the data disk:

and write test data to drive f:
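
Inside the VM, both steps can be scripted as well; a sketch assuming the new data disk is the only RAW disk and drive letter F is free:

# Initialize, partition, and format the new data disk as F:
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false

# Write some test data to F:
1..10 | ForEach-Object { Set-Content -Path "F:\TestFile$_.txt" -Value ('x' * 1MB) }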

Back in PowerShell, we resize the data disk. This requires stopping the VM and starting it back up:

$DataDisk = Get-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
Write-Log 'Data disk size:',"$($DataDisk.DiskSizeGB) GB",'stopping VM..' Green,Cyan,Green -NoNew
$VM | Stop-AzureRmVM -Force | Out-Null
do { Start-Sleep -Seconds 10 } while (
    (Get-AzureRmVM -ResourceGroupName $VM.ResourceGroupName -Name $VM.Name -Status).Statuses[1].DisplayStatus -ne 'VM deallocated'
)
Write-Log 'Done' Cyan
Write-Log 'Resizing disk',$VM.StorageProfile.DataDisks[0].Name,'to 250 GB' Green,Cyan,Green -NoNew
New-AzureRmDiskUpdateConfig -DiskSizeGB 250 | Update-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
Write-Log 'Done' Cyan
$DataDisk = Get-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
Write-Log 'New data disk size:',"$($DataDisk.DiskSizeGB) GB" Green,Cyan
Write-Log 'Starting VM',$VM.Name Green,Cyan -NoNew
$VM | Start-AzureRmVM | Out-Null
Write-Log 'Done' Green
mstsc /v:$IPv4Address

and we get output like

Back in the VM we see the new disk size:

We extend the volume to use all provisioned space:
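
Inside the VM, this can be scripted too; a brief sketch assuming the data volume is drive F:

# Extend the F: volume to the maximum size supported by the (now larger) underlying disk
$MaxSize = (Get-PartitionSupportedSize -DriveLetter F).SizeMax
Resize-Partition -DriveLetter F -Size $MaxSize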

And validate the data.

We cannot, however, use the same process in reverse to downsize a disk.

We can resize the volume down inside the VM:

In Computer Management/Disk Management, we shrink the volume down to 60 GB

Note: To reduce storage cost, shrink the volume to a size that’s just below a billing size. The current billing disk sizes are 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, 2 TB, 4 TB, 8 TB, 16 TB, and 32 TB

Shrinking the disk in Windows:
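
The same shrink can be scripted from PowerShell inside the VM; a sketch assuming the data volume is drive F:

# Shrink the F: volume to 60 GB (the data on the volume must fit within the new size)
Resize-Partition -DriveLetter F -Size 60GB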

Back in PowerShell, after shutting down and deallocating the VM, if we try to resize the disk down, the operation fails:
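
The attempt, reusing the variables from earlier, would look something like this (a sketch); Azure rejects the update because managed disks cannot be shrunk:

# Attempt to shrink the data disk from 250 GB back to 64 GB - this is rejected,
# since managed disks can only be resized up, not down
New-AzureRmDiskUpdateConfig -DiskSizeGB 64 |
    Update-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name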

We can also resize the OS disk up but not down:
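
With the VM deallocated, growing the OS disk follows the same pattern; a sketch (the 256 GB target is just an example):

# Resize the OS managed disk up (the VM must be stopped/deallocated first)
$OSDiskName = $VM.StorageProfile.OsDisk.Name
New-AzureRmDiskUpdateConfig -DiskSizeGB 256 |
    Update-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $OSDiskName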


Remove-AzureRMVMBackup function added to AZSBTools PowerShell module


The Remove-AzureRMVMBackup function has been added to the AZSBTools PowerShell module to simplify the task of locating and deleting Azure VM backups. The function also disables backup for the provided VM. It works with both ARM and classic ASM VMs.

This is helpful to do before deleting a retired Azure VM.

Remove-AzureRMVMBackup

This function will disable backup of the provided VM. It will also delete existing backups (recovery points – files) of the VM.

Example:

$ParameterList = @{
    LoginName = 'sam@domain.com'
    SubscriptionName = 'my subscription name'
    VMName = 'Widget3VM'
}
Remove-AzureRMVMBackup @ParameterList
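
Under the hood, the equivalent raw AzureRM.RecoveryServices calls for the example above look roughly like the following. This is a sketch, not the function’s exact implementation, and it assumes the VM is protected by a single Recovery Services vault in the current subscription:

# Locate the vault and the backup item for the VM, then disable protection and delete recovery points
$Vault = Get-AzureRmRecoveryServicesVault | Select-Object -First 1
Set-AzureRmRecoveryServicesVaultContext -Vault $Vault
$Container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVM -FriendlyName 'Widget3VM'
$Item = Get-AzureRmRecoveryServicesBackupItem -Container $Container -WorkloadType AzureVM
Disable-AzureRmRecoveryServicesBackupProtection -Item $Item -RemoveRecoveryPoints -Force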

 


To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type

$PSVersionTable

To download and install the latest version of AZSBTools and its dependencies from the PowerShell Gallery, first trust the Microsoft PowerShell Gallery repository:

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

then install the modules:

Install-Module AZSBTools,AzureRM -Force -AllowClobber

AZSBTools contains functions that depend on AzureRM modules, and they’re typically installed together.

To load the AZSBTools, and AzureRM modules type:

Import-Module AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show


Validate-NameResolution cmdlet added to AZSBTools PowerShell module


In the course of IAAS VM migrations from on-premises to Azure, the VM IP address changes. In a Windows VM, we typically invoke a command like

ipconfig /registerdns

or

Register-DNSClient

which is part of the DnsClient PowerShell module.

The Validate-NameResolution cmdlet, part of the AZSBTools module, queries each DNS server in the current AD to make sure that all DCs in the current AD forest resolve a given computer name to the same IP. This helps diagnose instances where DNS partition replication is not functioning properly and some DCs resolve a given computer name to the old on-premises IP while others resolve the same name to the new Azure IP.

This cmdlet takes one required parameter, -ComputerName, which accepts one or more computer names.

Example: Validate-NameResolution -ComputerName 'myTestPC'

The cmdlet outputs interim information to the console like:

In addition, it also returns PSCustomObject objects, one for each resolved IP address, with the following properties: ComputerName, ResolvesTo, and DNSServer, similar to:
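
For illustration only, a rough equivalent of such a check (this is not the module’s actual implementation; Get-ADDomainController requires the ActiveDirectory module and, for simplicity, this queries the DCs of the current domain):

# Query every DC and report which IP each one resolves the name to
$ComputerName = 'myTestPC'
$DCs = (Get-ADDomainController -Filter *).HostName
foreach ($DC in $DCs) {
    Resolve-DnsName -Name $ComputerName -Server $DC -Type A -ErrorAction SilentlyContinue |
        Where-Object { $_.QueryType -eq 'A' } |
        ForEach-Object {
            [PSCustomObject]@{
                ComputerName = $ComputerName
                ResolvesTo   = $_.IPAddress
                DNSServer    = $DC
            }
        }
}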


To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type

$PSVersionTable

To download and install the latest version of AZSBTools from the PowerShell Gallery and its dependencies, type

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted 
Install-Module AZSBTools,AzureRM -Force

AZSBTools contains functions that depend on the AzureRM module, and they’re typically installed together.

To load the AZSBTools and AzureRM modules type:

Import-Module AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show


Azure storage – features and pricing – June 2018


In the ever-evolving Azure storage lineup, it can be hard to keep track of the available Azure storage offerings and their general cost structure at a given point in time. This post lays out a general summary of Azure storage offerings and costs, from the perspective of a consultant trying to make a recommendation to a large client as to what storage type/options to use for which workload and why.

Storage account type

Classic storage

  • This is of type ‘Microsoft.ClassicStorage/storageAccounts’
  • This is considered legacy and should be migrated to GPv1, which provides backward compatibility with Azure classic (ASM) services

General purpose v1 (GPv1)

  • This is of type ‘Microsoft.Storage/storageAccounts’, kind ‘Storage’
  • Does not support Cool and Archive access tier attributes
  • Lower read/write transaction cost compared to GPv2
  • Can be used with Azure classic (ASM) services

General purpose v2 (GPv2)

  • This is of type ‘Microsoft.Storage/storageAccounts’, kind ‘StorageV2’
  • Supports Cool and Archive access tier attributes
  • Access Tier attribute (Hot/Cool) is exposed at the account level
  • Cannot be used with Azure classic (ASM) services
  • Compared to a Blob Storage account: GPv2 charges for Cool early deletion but not for Cool data writes (per GB)

Blob storage

  • This is of type ‘Microsoft.Storage/storageAccounts’, kind ‘BlobStorage’
  • This is a sub-type of GPv2 that supports only Block Blobs – not Page Blobs, Azure files, …
  • Minor pricing difference: charges for Cool data writes (per GB) but not for Cool early deletion
  • Features similar to GPv2:
    • Supports Cool and Archive access tier attributes
    • Access Tier attribute (Hot/Cool) is exposed at the account level
    • Cannot be used with Azure classic (ASM) services

GPv1 and Blob storage accounts can be upgraded to a GPv2 account. This change cannot be reversed.

Set-AzureRmStorageAccount -ResourceGroupName <resource-group> -AccountName <storage-account> -UpgradeToStorageV2

Data Access Tiers

Data Access Tiers are Hot, Cool, and Archive. They represent 3 different physical storage platforms.

For the purpose of simplification, I will refer to LRS pricing in the US East Azure region, in US dollars, for the next 450 TB/month pricing tier.

  • Data Access Tiers are supported in GPv2 accounts only (and the Blob storage sub-type), not GPv1 (see the example after this list)
  • A blob cannot be read directly from the Archive tier. To read a blob in Archive, it must first be moved to Hot or Cool.
  • Data in the Cool tier is subject to a minimum stay period of 30 days. So, moving 1 TB of data to the Cool tier, for example, obligates the client to pay for that storage for a minimum of 30 days, even if the data is moved or deleted before the end of the 30 days. This is implemented via the ‘Early deletion charge’, which is prorated.
  • Data in the Archive tier is subject to a minimum stay period of 180 days.
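
For example, the account-level Access Tier attribute of a GPv2 (or Blob storage) account can be set from PowerShell (hypothetical resource group and account names):

# Set the default access tier of a GPv2 storage account to Cool
Set-AzureRmStorageAccount -ResourceGroupName 'my-RG' -Name 'mystorageaccount01' -AccessTier Cool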

Hot

  • This is the regular standard tier where data can be routinely read and written
  • Storage cost is 2 cents per GB per month
  • Lowest read and write IO charges (5 cents per 10k write, 0.4 cents per 10k read)
  • No data retrieval charge (this is the cost to copy data to the Hot tier from another tier before it can be read; data in the Hot tier is already readable)

Cool

  • 25% cheaper storage cost down to 1.5 cents per GB per month
  • More costly read and write IO charges (10 cents per 10k write = 200% Hot, 1 cent per 10k read = 250% Hot)
  • 1 cent per GB data retrieval charge (this is the cost to copy the data to Hot tier, which is a required interim step to read data that resides in Cool tier)

Archive

  • 90% cheaper storage cost down to 0.2 cents per GB per month
  • Most costly read and write IO charges (10 cents per 10k write = 200% Hot, 5 dollars per 10k read = 125,000% Hot)
  • 2 cent per GB data retrieval charge (this is the cost to copy the data to Hot tier, which is a required interim step to read data that resides in Archive tier)
  • The archive tier can only be applied at the blob level, not at the storage account level
  • Blobs in the archive storage tier have several hours of latency (Is this tier using tape not disk!?)

Geo-replication

Geo-replication refers to automatically keeping a copy of a given Storage Account’s data in another Azure region that’s 400 miles or more away from the primary site. The primary Azure site is the Azure region where the Storage Account resides and where the usual read/write transactions occur. The choice of the secondary site is fixed by Microsoft (it is not configurable) and applies when the client chooses the geo-replication feature of a Storage Account. The following is the list of Azure region pairs as of 18 June 2018:

Data in the secondary Azure region cannot be accessed. It is only used under the following conditions:

  • Microsoft declares a region-wide outage. For example East US. This is a Microsoft triggered – not client triggered event
  • Microsoft will first try to restore the service in that region. During that time data is not accessible for read or write. It’s unclear how long Microsoft will pursue this effort before moving to geo-failover.
  • Microsoft initiates geo-failover from the primary to secondary region. That’s all data of all tenants in the primary region.
    • This is essentially a DNS change to re-point the storage end points’ fully qualified domain names to the secondary region
    • The RPO (Recovery Point Objective) is 15 minutes. That is to say, up to 15 minutes’ worth of data may be lost.
    • The RTO (Recovery Time Objective) is unclear. That’s the time from the primary site outage to data availability for read and write on the secondary site
    • At the end of the geo-failover, read/write access is restored (for geo-replicated storage accounts only of course)
    • At a later time, Microsoft will perform a geo-failback which is the same process in the reverse direction
  • This is a process that has never happened before; no Azure data center has ever sustained a complete loss.
  • It’s unclear when failback would be triggered, or whether it would include downtime or another 15 minutes of data loss.

A storage account can be configured in one of several choices with respect to geo-replication:
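
The choice is expressed through the storage account SKU at creation time. For example (hypothetical names; Standard_LRS, Standard_ZRS, and Standard_RAGRS are the other SKU values):

# Create a GPv2 storage account with geo-redundant storage (GRS)
New-AzureRmStorageAccount -ResourceGroupName 'my-RG' -Name 'mystorageaccount01' `
    -Location 'EastUS' -SkuName 'Standard_GRS' -Kind 'StorageV2'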

LRS (Locally redundant storage)

When a write request is sent to an Azure Storage Account, the transaction is fully replicated on three different physical disks across three fault domains and upgrade domains inside the primary location, then success is returned to the client.

GRS (Geo-redundant storage)

In addition to triple synchronous local writes, GRS adds triple asynchronous remote writes of each data block.  Data is asynchronously replicated to the secondary Azure site within 15 minutes.

ZRS (Zone-redundant storage)

ZRS is similar to LRS but it provides slightly more durability than LRS (12 9’s instead of 11 9’s for LRS over a given year).

It’s only available to GPv2 accounts.

RA-GRS (Read access geo-redundant storage)

RA-GRS is similar to GRS but it provides read access to the data in the secondary Azure site.


Azure Automation – getting started


Azure Automation allows Azure administrators to run PowerShell and other scripts against an Azure subscription. It provides several benefits over running the same scripts from the user’s desktop computer, including:

  • Scripts run in Azure and are not dependent on the end-user desktop
  • Scripts are highly available by design.
  • Scheduling is a built-in feature
  • Authentication is streamlined for both classic ASM and current ARM resources

To get started with Azure Automation:

  1. Create an Azure Automation account
  2. Install needed PowerShell modules
  3. Create, run, test, schedule scripts

Create an Azure Automation account

In the current portal, Create Resource > Monitoring and Management > Automation > Create

In the ‘Add Automation Account’ blade enter/select a name for the Automation Account, Azure Subscription, Resource Group, and Azure Location

Azure will take a few minutes to create the automation account and associated objects.

We can now run scripts against the Azure subscription selected above. Here are some examples:

Create a test script

In the Automation Account blade, click Runbooks

Click ‘Add a runbook’ link on the top to create a new runbook of type PowerShell

Azure creates the runbook/script, and opens the ‘Edit PowerShell Runbook’ blade

Type in the desired command, click Save, then ‘Test pane’

In the ‘Test’ blade, click ‘Start’. Azure will queue and execute the script
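
A minimal test runbook that authenticates and lists the VMs in the subscription might look like the following. This is a sketch that assumes the Automation account was created with the default Azure Run As account (the ‘AzureRunAsConnection’ connection asset):

# Authenticate to Azure using the Automation account's Run As connection
$Conn = Get-AutomationConnection -Name 'AzureRunAsConnection'
Add-AzureRmAccount -ServicePrincipal -TenantId $Conn.TenantId `
    -ApplicationId $Conn.ApplicationId -CertificateThumbprint $Conn.CertificateThumbprint | Out-Null

# List the VMs in the subscription
Get-AzureRmVM | Select-Object Name, ResourceGroupName, Location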

Notes:

  • This is not like the PowerShell ISE. There’s no auto-completion for one thing.
  • If Azure comes across a bad command, it will try to execute THE ENTIRE SCRIPT repeatedly, and is likely to get stuck.
  • This shell does not support user interaction. So, any cmdlet that would typically require a user confirmation/interaction of any type will fail. For example, Install-Module cmdlet will fail since it requires user approval/interaction to install PowerShellGet.

Install needed modules

To see available modules click ‘Modules’ in the Automation Account blade

Click ‘Browse Gallery’ on top and search for the desired module

These modules come from the Microsoft PowerShell Gallery.

Click on the desired module, view its functions, and click Import to import it to this automation shell

Now that the module is imported, we can use it in scripting in this particular automation shell:


New-SBAZServicePrincipal cmdlet to create new Azure AD Service Principal added to AZSBTools PowerShell module


For the use case of running PowerShell scripts that perform tasks on objects in an Azure subscription, we need to be able to run such scripts under a user context other than that of the script author, which is what typically happens during script development. A Service Principal is an Azure AD identity intended for this purpose. The New-SBAZServicePrincipal function automates and simplifies the process of creating an Azure Service Principal.

Parameters

The New-SBAZServicePrincipal function takes the following parameters

ServicePrincipalName

This parameter accepts one or more Service Principal names

Environment

This parameter accepts a value that represents which Azure cloud to create the SPN in. It defaults to the Azure Commercial cloud (AzureCloud). As of 15 March 2018 the list is:

  • AzureCloud
  • AzureUSGovernment
  • AzureChinaCloud
  • AzureGermanCloud

To see the current list, use: (Get-AzureRMEnvironment).Name

Role

This parameter is used to assign a Role/Permissions to the Service Principal in the current subscription.
The default is the ‘Owner’ role.
As of 16 March 2018 the following built-in roles are defined:
API Management Service Contributor
Application Insights Component Contributor
Automation Operator
BizTalk Contributor
Classic Network Contributor
Classic Storage Account Contributor
Classic Storage Account Key Operator Service Role
Classic Virtual Machine Contributor
ClearDB MySQL DB Contributor
Contributor
Cosmos DB Account Reader Role
Data Factory Contributor
Data Lake Analytics Developer
DevTest Labs User
DNS Zone Contributor
DocumentDB Account Contributor
Intelligent Systems Account Contributor
Log Analytics Contributor
Log Analytics Reader
Network Contributor
New Relic APM Account Contributor
Owner
Reader
Redis Cache Contributor
Scheduler Job Collections Contributor
Search Service Contributor
Security Manager
SQL DB Contributor
SQL Security Manager
SQL Server Contributor
Storage Account Contributor
Storage Account Key Operator Service Role
Traffic Manager Contributor
User Access Administrator
Virtual Machine Contributor
Web Plan Contributor
Website Contributor
For more details on roles, type in:

Get-AzureRmRoleDefinition | select name,description,actions | Out-GridView

Output

The New-SBAZServicePrincipal function returns a PS object for each input Service Principal name, containing the following properties:
ServicePrincipalName
TenantId
Environment
Role

Details

The New-SBAZServicePrincipal function performs the following tasks for each provided Service Principal name (a rough sketch of the equivalent raw AzureRM calls follows this list):

  1. Create/Validate the Azure AD App. The Azure AD App is required to create a Service Principal. It carries the same name, and its initial URL matches that name as well
  2. Create/Validate the Azure AD Service Principal. The user is prompted to enter the desired password for the SPN. The password is encrypted and saved in the user’s temp folder for use with future automations
  3. Assign the provided Role to the SPN for the current subscription. By default this is the ‘Owner’ role. This allows the created SPN to perform all tasks against the current subscription.
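
The rough equivalent using raw AzureRM cmdlets (a sketch, not the function’s exact implementation; the app name, URLs, and password handling are illustrative only):

# Create the Azure AD app, the Service Principal, and assign the Owner role (illustrative names)
$Password = Read-Host -Prompt 'SPN password' -AsSecureString
$App = New-AzureRmADApplication -DisplayName 'PowerShell01' `
    -IdentifierUris 'http://PowerShell01' -HomePage 'http://PowerShell01'
# Depending on the AzureRM version, the password parameter may expect a SecureString or a PSADPasswordCredential
$SPN = New-AzureRmADServicePrincipal -ApplicationId $App.ApplicationId -Password $Password
Start-Sleep -Seconds 30   # allow the new principal to propagate before assigning the role
New-AzureRmRoleAssignment -RoleDefinitionName 'Owner' -ServicePrincipalName $App.ApplicationId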

Registered Apps can be also viewed in the Azure portal under Azure Active Directory/App Registrations blade:

Example

$SPList = New-SBAZServicePrincipal -ServicePrincipalName PowerShell01,samtest1

This example creates two Service Principals, PowerShell01 and samtest1, in the default Azure Commercial cloud, and assigns them the default Owner role in the current subscription.

The New-SBAZServicePrincipal function first pops up the Azure login window to identify which subscription to use:

This function has been tested with both Azure Commercial and Azure US GOV clouds.

Next enter the desired password for each of the 2 provided Service Principals:

The function saves the encrypted password to the user temp folder for future use/automation.

It also displays console output similar to:

The Service Principals can now be used to run other PowerShell scripts.

The newly registered/validated Apps can also be viewed from the Azure Portal


To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type

$PSVersionTable

To download and install the latest version of AZSBTools from the PowerShell Gallery and its dependencies, type

Install-Module POSH-SSH,SB-Tools,AZSBTools,AzureRM -Force

AZSBTools contains functions that depend on POSH-SSH, SB-Tools, and AzureRM modules, and they’re typically installed together.

To load the POSH-SSH, SB-Tools, AZSBTools, and AzureRM modules type:

Import-Module POSH-SSH,SB-Tools,AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show


StorSimple 8k software release 4.0


Around mid February 2017, Microsoft released StorSimple software version 4.0 (17820). This is a release that includes firmware and driver updates that require using Maintenance mode and the serial console.

Using this PowerShell script to save the Version 4.0 cmdlets and compare them to Version 3.0, I got:

[Image: storsimple40-a]
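
That comparison can be as simple as diffing the two cmdlet name lists; a sketch (not the linked script) assuming the exposed cmdlet names were saved to text files from each version’s console session:

# Compare the cmdlets exposed by software versions 3.0 and 4.0
$V3 = Get-Content .\StorSimple-3.0-cmdlets.txt
$V4 = Get-Content .\StorSimple-4.0-cmdlets.txt
Compare-Object -ReferenceObject $V3 -DifferenceObject $V4   # '=>' entries are new in 4.0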

Trying the new cmdlets, the Get-HCSControllerReplacementStatus cmdlet returns a message like:

[Image: storsimple40-b]

The Get-HCSRehydrationJob returns no output (no restore jobs are running)

The Invoke-HcsDiagnostics cmdlet seems pretty useful and returns output similar to:

[Image: storsimple40-c]

The cmdlet takes a little while to run. In this case it took 14 minutes and 38 seconds:

[Image: storsimple40-d]

It returns data from its several sections, like:

System Information section:

[Image: storsimple40-e]

This is output similar to what we get from the Get-HCSSystem cmdlet for both controllers.

Update Availability section:

[Image: storsimple40-f]

This is output similar to Get-HCSUpdateAvailability cmdlet, although the MaintenanceModeUpdatesTitle property is empty !!??

[Image: storsimple40-g]

Cluster Information section:

[Image: storsimple40-h]

This is new exposed information. I’m guessing this is the output of some Get-HCSCluster cmdlet, but this is pure speculation on my part. I’m also guessing that this is a list of clustered roles in a traditional Server 2012 R2 failover cluster.

Service Information section:

[Image: storsimple40-i]

This is also new exposed information. Get-Service is not an exposed cmdlet.

Failed Hardware Components section:

[Image: storsimple40-j]

This is new exposed information. This device is in good working order, so this list may be false warnings.

Firmware Information section:

[Image: storsimple40-k]

This output is similar to what we get from Get-HCSFirmwareVersion cmdlet

Network Diagnostics section:

[Image: storsimple40-l]

Most of this information is not new, but it’s nicely bundled into one section.

Performance Diagnostics section:

[Image: storsimple40-m]

Finally, this section provides new information about read and write latency to the configured Azure Storage accounts.

The full list of exposed cmdlets in Version 4.0 is:

Clear-DnsClientCache
Set-CloudPlatform
Select-Object
Restart-HcsController
Resolve-DnsName
Out-String
Out-Default
Set-HcsBackupApplianceMode
Measure-Object
Invoke-HcsmServiceDataEncryptionKeyChange
Invoke-HcsDiagnostics
Get-History
Get-Help
Get-HcsWuaVersion
Get-HcsWebProxy
Invoke-HcsSetupWizard
Set-HcsDnsClientServerAddress
Set-HcsNetInterface
Set-HcsNtpClientServerAddress
Test-HcsNtp
Test-HcsmConnection
Test-Connection
Sync-HcsTime
Stop-HcsController
Start-Sleep
Start-HcsUpdate
Start-HcsPeerController
Start-HcsHotfix
Start-HcsFirmwareCheck
Set-HcsWebProxy
Set-HcsSystem
Set-HcsRemoteManagementCert
Set-HcsRehydrationJob
Set-HcsPassword
Get-HcsUpdateStatus
Trace-HcsRoute
Get-HcsUpdateAvailability
Get-HcsSupportAccess
Enable-HcsRemoteManagement
Enable-HcsPing
Enable-HcsNetInterface
Disable-HcsWebProxy
Disable-HcsSupportAccess
Disable-HcsRemoteManagement
Enable-HcsSupportAccess
Disable-HcsPing
Test-NetConnection
Test-HcsStorageAccountCredential
TabExpansion2
Reset-HcsFactoryDefault
prompt
Get-NetAdapter
Disable-HcsNetInterface
Enable-HcsWebProxy
Enter-HcsMaintenanceMode
Enter-HcsSupportSession
Get-HcsRoutingTable
Get-HcsRemoteManagementCert
Get-HcsRehydrationJob
Get-HcsNtpClientServerAddress
Get-HcsNetInterface
Get-HcsFirmwareVersion
Get-HcsDnsClientServerAddress
Get-HCSControllerReplacementStatus
Get-HcsBackupApplianceMode
Get-Credential
Get-Command
Export-HcsSupportPackage
Export-HcsDataContainerConfig
Exit-PSSession
Exit-HcsMaintenanceMode
Get-HcsSystem
Update-Help


StorSimple 8k series as a backup target?


19 December 2016

After a conference call with the Microsoft Azure StorSimple product team, they explained:

  •  “The maximum recommended full backup size when using an 8100 as a primary backup target is 10TiB. The maximum recommended full backup size when using an 8600 as a primary backup target is 20TiB”
  • “Backups will be written to array, such that they reside entirely within the local storage capacity”

Microsoft acknowledged the difficulty resulting from the maximum provisionable space being 200 TB on an 8100 device, which limits the ability to over-provision thin-provisioned tiered iSCSI volumes when expecting significant deduplication/compression savings with, for example, long-term Veeam backup copy job files.

Conclusion

  • When used as a primary backup target, StorSimple 8k devices are intended for SMB clients with backup files under 10TB/20TB for the 8100/8600 models respectively
  •  Compared to using an Azure A4 VM with attached disks (page blobs), StorSimple provides 7-22% cost savings over 5 years

15 December 2016

On 13 December 2016, Microsoft announced support for using StorSimple 8k devices as a backup target. Many customers have asked for StorSimple to support this workload. The StorSimple hybrid cloud storage iSCSI SAN features automated tiering at the block level from its SSD to SAS to Azure tiers. This makes it a perfect fit for the Primary Data Set for unstructured data such as file shares. It also features cloud snapshots, which provide the additional functionality of data backup and disaster recovery. That’s primary storage, secondary storage (short term backups), long term storage (multiyear retention), off site storage, and multi-site storage, all in one solution.

However, the above features that lend themselves handy to the primary data set/unstructured data pose significant difficulties when trying to use this device as a backup target, such as:

  • Automated tiering: many backup software packages (like Veeam) do things like forward incremental, synthetic full, and backup copy jobs for long term retention, all of which scan/access files that are typically dozens of TB each. This causes the device to tier data to Azure and back to the local device in a way that slows things down to a crawl. DPM is even worse, specifically in the way it allocates/controls volumes.
  • The arbitrary maximum allocatable space for a device (200 TB for an 8100 device, for example) makes it practically impossible to use the device as a backup target for long term retention.
    • Example: 50 TB volume, need to retain 20 copies for long term backup. Even if the change rate is very low and the actual bits after deduplication and compression of 20 copies come to 60 TB, we cannot provision 20x 50 TB volumes, or a 1 PB volume. This makes the maximum workload size around 3 TB if long term retention requires 20 recovery points. 3 TB is far too small a limit for enterprise clients who simply want to use Azure for long term backup where a single backup file is 10-200 TB.
  • The specific implementation of the backup catalog and who has it (the backup software versus the StorSimple Manager service).
  • There is no single unified tool for backup/recovery – we have to use both the backup software and StorSimple Manager, which do not communicate and are not aware of each other.
  • Granular recoveries (single file/folder): currently, to recover a single file from a snapshot, we must clone the entire volume.

In this article, published 6 December 2016, Microsoft lays out its reference architecture for using a StorSimple 8k device as a Primary Backup Target for Veeam:

[Image: primarybackuptargetlogicaldiagram]

There are a number of best practices relating to how to configure Veeam and StorSimple for this use case, such as disabling deduplication, compression, and encryption on the Veeam side, dedicating the StorSimple device to the backup workload, …

The interesting part comes in when you look at scalability. Here’s Microsoft’s listed example of a 1 TB workload:

[Image: ss-backup-target03]

This architecture suggests provisioning 5*5TB volumes for the daily backups and a 26TB volume for the weekly, monthly, and annual backups:

[Image: ss-backup-target04]

This 1:26 ratio between the Primary Data Set and Vol6 (used for the weekly, monthly, and annual backups) suggests that the maximum supported Primary Data Set is about 2.46 TB, since the maximum volume size is 64 TB (64 TB ÷ 26 ≈ 2.46 TB)!?

[Image: ss-backup-target05]

This reference architecture suggests that this solution may not work for a file share that is larger than 2.5 TB, or one that may need to be expanded beyond 2.5 TB.

Furthermore, this reference architecture suggests that the maximum Primary Data Set cannot exceed 2.66TB on an 8100 device, which has 200TB maximum allocatable capacity, reserving 64TB to be able to restore the 64TB Vol6

[Image: ss-backup-target06]

It also suggests that the maximum Primary Data Set cannot exceed 8.55TB on an 8600 device, which has 500TB maximum allocatable capacity, reserving 64TB to be able to restore the 64TB Vol6

[Image: ss-backup-target07]

Even if we consider cloud snapshots to be used only in case of total device loss – disaster recovery, and we allocate the maximum device capacity, the 8100 and 8600 devices can accommodate 3.93TB and 9.81TB respectively:

[Image: ss-backup-target08]

Conclusion:

Although the allocation of 51 TB of space to back up 1 TB of data resolves the tiering issue noted above, it significantly erodes the value proposition provided by StorSimple.



StorSimple 8k series software version reference


This post lists StorSimple software versions, their release dates, and major new features for reference. Microsoft does not publish release dates for StorSimple updates. The release dates below are from published documentation and/or first hand experience. They may be off by up to 15 days.

  • Version 4.0 (17820) – released 12 February 2017 – see release notes, and this post.
    • Major new features: Invoke-HCSDiagnostics new cmdlet, and heatmap based restores
  • Version 3.0 (17759) – released 6 September 2016 – see release notes, and this post.
    • Major new features: the use of a StorSimple as a backup target (as of 9/9/2016 it’s unclear what that means)
  • Version 2.2 (17708) – see release notes
  • Version 2.1 (17705) – see release notes
  • Version 2.0 (17673) – released January 2016 – see release notes, this post, and this post
    • Major new features: Locally pinned volumes, new virtual device 8020 (64TB SSD), ‘proactive support’, OVA (preview)
  • Version 1.2 (17584) – released November 2015 – see release notes, this post, and this post
    • Major new features: (Azure-side) migration from legacy 5k/7k devices to 8k devices, support for Azure US GOV, support for cloud storage from other public clouds such as AWS/HP/OpenStack, update to the latest API (this should allow us to manage the device in the new portal, yet this has not happened as of 9/9/2016)
  • Version 1.1 (17521) – released October 2015 – see release notes
  • Version 1.0 (17491) – released 15 September 2015 – see release notes and this post
  • Version 0.3 (remains 17361) – released February 2015 – see release notes
  • Version 0.2 (17361) – released January 2015 – see release notes and this post
  • Version 0.1 (17312) – released October 2014 – see release notes
  • Version GA (General Availability – 0.0 – Kernel 6.3.9600.17215) – released July 2014 – see release notes – This is the first Windows OS based StorSimple software after Microsoft’s acquisition of StorSimple company.
  • When Microsoft acquired the StorSimple company, the StorSimple 5k/7k series ran Linux-based StorSimple software version 2.1.1.249 – August 2012