
Resizing managed VM disks in Azure


Executive summary:

  • As of 7 March 2019, Microsoft allows resizing data and OS managed disks up via PowerShell and the Azure Portal
  • Microsoft does not allow resizing managed disks down
  • Disk resizing requires VM shutdown and restart

Microsoft charges for the entire amount of allocated disk space of managed disks.

Also see the example in this post.

This is a major difference compared to unmanaged disks, where Microsoft charges only for used disk space. IT professionals now have to walk a tightrope in terms of disk capacity in Azure: on one hand, you need a minimum amount of free space on each disk to guard against out-of-disk-space scenarios; on the other hand, you need to keep the overall disk size as small as possible to avoid the high disk cost. Currently, Microsoft charges for managed disk capacity as follows (East US, Standard LRS):

For example, if we have a VM with a 100 GB data disk of which 50 GB are used, we’re billed for S10 (128 GB), the next size up, at $5.89/month.
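
To make the round-up rule concrete, here is a minimal sketch that maps a provisioned disk size to its billing tier size (the tier sizes are from the note further down; the function name is hypothetical):

$TierSizesGB = 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768
function Get-BillingTierSizeGB ([int]$DiskSizeGB) {
    # Billing rounds up to the smallest tier that fits the provisioned size
    $TierSizesGB | Where-Object { $_ -ge $DiskSizeGB } | Select-Object -First 1
}
Get-BillingTierSizeGB 100   # returns 128 -> billed as S10, $5.89/month in this example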

As data grows over time, we may need to expand this disk. We can resize a managed data disk using PowerShell as follows:

First we declare the needed variables, and authenticate to our Azure subscription:

#Requires -Version 5
#Requires -Modules AzureRM,AZSBTools

# Install-Module AZSBTools

$LoginName           = 'myname@domain.com'
$SubscriptionName    = 'my subscription name'
$Location            = 'EastUS'
$UseCase             = 'TestMD2'

$VMParameterList = @{
    Name                = "$UseCase-VM"
    ResourceGroupName   = "$UseCase-RG"
    Location            = $Location
    VirtualNetworkName  = "$UseCase-Vnet"
    SubnetName          = "$UseCase-Subnet"
    PublicIpAddressName = "$UseCase-PiP"
    OpenPorts           = @(80,3389)
    Credential          = (Get-SBCredential 'myVMAdmin') 
    Size                = 'Standard_D1_v2'       # Get-AzureRmVMSize -Location $Location
    DataDiskSizeInGb    = 128 
}

Login-AzureRmAccount -Credential (Get-SBCredential $LoginName) | Out-Null # -Environment AzureCloud 
Get-AzureRmSubscription -SubscriptionName $SubscriptionName -WA 0 | 
    Set-AzureRmContext | Out-Null 

Next we provision our test VM:

$Duration = Measure-Command { $VM = New-AzureRmVM @VMParameterList }
Write-Log 'Done in',"$($Duration.Hours):$($Duration.Minutes):$($Duration.Seconds) hh:mm:ss" Green,Cyan
Write-Log ' OS Disk (Managed): size',"$($VM.StorageProfile.OsDisk.DiskSizeGB) GB",'- Underlying storage',$VM.StorageProfile.OsDisk.ManagedDisk.StorageAccountType Green,Cyan,Green,Cyan
$VM.StorageProfile.DataDisks | foreach { 
    Write-Log ' Data Disk (Managed): Lun',$_.Lun,'- size',"$($_.DiskSizeGB) GB" Green,Cyan,Green,Cyan
} 

and we get output like:

Next we RDP to the test VM and write test data to the data disk:

$IPv4Address = (Get-AzureRmPublicIpAddress -ResourceGroupName $VM.ResourceGroupName -Name "$UseCase-PiP").IpAddress  
mstsc /v:$IPv4Address

After logging in to the VM, we partition and format the data disk.
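
A minimal in-guest sketch of those steps, using the built-in Storage cmdlets (this assumes the new data disk is the only RAW disk and that drive letter F is free):

Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Data' -Confirm:$false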

We then write test data to the F: drive.

Back in PowerShell, we resize the data disk. This requires stopping the VM and starting it back up:

$DataDisk = Get-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
Write-Log 'Data disk size:',"$($DataDisk.DiskSizeGB) GB",'stopping VM..' Green,Cyan,Green -NoNew
$VM | Stop-AzureRmVM -Force | Out-Null
do { Start-Sleep -Seconds 10 } while (
    (Get-AzureRmVM -ResourceGroupName $VM.ResourceGroupName -Name $VM.Name -Status).Statuses[1].DisplayStatus -ne 'VM deallocated'
)
Write-Log 'Done' Cyan
Write-Log 'Resizing disk',$VM.StorageProfile.DataDisks[0].Name,'to 250 GB' Green,Cyan,Green -NoNew
New-AzureRmDiskUpdateConfig -DiskSizeGB 250 | Update-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
Write-Log 'Done' Cyan
$DataDisk = Get-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
Write-Log 'New data disk size:',"$($DataDisk.DiskSizeGB) GB" Green,Cyan
Write-Log 'Starting VM',$VM.Name Green,Cyan -NoNew
$VM | Start-AzureRmVM | Out-Null
Write-Log 'Done' Green
mstsc /v:$IPv4Address

and we get output like:

Back in the VM we see the new disk size:

We extend the volume to use all provisioned space.
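
Equivalently, the extension can be scripted in-guest (a sketch assuming the data volume is F:):

$MaxSize = (Get-PartitionSupportedSize -DriveLetter F).SizeMax   # new ceiling after the disk resize
Resize-Partition -DriveLetter F -Size $MaxSize                   # grow F: to use all provisioned space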

And validate the data.

We cannot, however, use the same process in reverse to downsize a disk.

We can resize the volume down inside the VM:

In Computer Management > Disk Management, we shrink the volume down to 60 GB:

Note: To reduce storage cost, shrink the volume to a size that’s just below a billing size. The current billing disk sizes are 32 GB, 64 GB, 128 GB, 256 GB, 512 GB, 1 TB, 2 TB, 4 TB, 8 TB, 16 TB, and 32 TB.

Shrinking the disk in Windows:

Back in PowerShell, after shutting down and deallocating the VM, if we try to resize the disk down, the request fails.
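
A sketch of the attempted downsize, mirroring the resize-up syntax above; Azure rejects any size smaller than the disk’s current size:

# The volume inside the VM was shrunk to 60 GB, so 64 GB is the target billing size
New-AzureRmDiskUpdateConfig -DiskSizeGB 64 |
    Update-AzureRmDisk -ResourceGroupName $VM.ResourceGroupName -DiskName $VM.StorageProfile.DataDisks[0].Name
# fails: managed disks cannot be resized down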

We can also resize the OS disk up but not down:


Remove-AzureRMVMBackup function added to AZSBTools PowerShell module


The Remove-AzureRMVMBackup function has been added to the AZSBTools PowerShell module to simplify the task of locating and deleting Azure VM backups. The function also disables backup for the provided VM. It works with both ARM and classic ASM VMs.

This is helpful to do before deleting a retired Azure VM.

Remove-AzureRMVMBackup

This function will disable backup of the provided VM. It will also delete existing backups (recovery points – files) of the VM.

Example:

$ParameterList = @{
    LoginName = 'sam@dmain.com'
    SubscriptionName = 'my subscription name'
    VMName = 'Widget3VM'
}
Remove-AzureRMVMBackup @ParameterList



To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type

$PSVersionTable

To download and install the latest version of AZSBTools and its dependencies from the PowerShell Gallery, first trust the Microsoft PowerShell Gallery repository:

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted

then install:

Install-Module AZSBTools,AzureRM -Force -AllowClobber

AZSBTools contains functions that depend on AzureRM modules, and they’re typically installed together.

To load the AZSBTools, and AzureRM modules type:

Import-Module AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show

Unmanaged Azure Disk Snapshot functions added to AZSBTools PowerShell module to View, Add, Delete


3 new functions have been added to the AZSBTools PowerShell module to view, add, and delete snapshots of unmanaged Azure disks.

The AzureRM.Compute PowerShell module (v 5.9.1 as of 1 Jan 2019) provides the Get-AzureRmSnapshot, New-AzureRmSnapshot, and Remove-AzureRmSnapshot cmdlets to handle disk snapshots of managed disks. Although the same snapshot capabilities are available for unmanaged disks, there are no PS cmdlets that provide similar functionality for unmanaged disk snapshots.

Managed versus unmanaged disks

Managed Disks handle storage account creation/management, which avoids, for example, hitting the 20k IOPS limit of a standard storage account (no more than 40 standard disks in the same Storage Account @ 500 IOPS each). This is a pretty nice feature, but it comes at a steep price. Consider Standard LRS East US region pricing, for example: a 1 TB data disk (S30) is $40.96/month.

If we use 10 GB of this disk, we pay $41/month even if we power off and deallocate the VM. We still pay the $41/month for this 1 TB disk, including the roughly 1,013 GB of unused space.

In comparison, the same 1 TB unmanaged disk using the same 500 IOPS with 10 GB of used space costs $0.45/month (10 GB * $0.045 per GB).

This is because we’re billed only for the 10 GB used space not the 1023 GB allocated space, whether the VM is up and running or powered off and deallocated.

In short, managed disks come at the cost of paying for allocated space, not used space. Given that used space is often as little as 1% of the allocated space, I highly recommend against using managed disks at this time (1 Jan 2019).

Well, following my own advice, I find it hard to use unmanaged disks via PowerShell to perform routine Azure infrastructure management tasks such as:

  • Determine the amount of used disk space (separate post on that)
  • Manage disk snapshots; list, create, delete (the subject of this post)
  • Convert managed to unmanaged disk (separate post on that)

The functions Get-AzureRMUnmanagedDiskSnapshot, New-AzureRMUnmanagedDiskSnapshot, and Remove-AzureRMUnmanagedDiskSnapshot have been added to the AZSBTools module to simplify management of unmanaged disk snapshots via PowerShell. These functions do not affect the VM lease on its disk(s), do not require VM shutdown, and do not interfere with VM operation.

Get-AzureRMUnmanagedDiskSnapshot

This function will list disk snapshots for a given unmanaged disk. It applies to unmanaged ARM disk snapshots only, not classic ASM disks or managed ARM disks. It depends on the AzureRM and Azure PowerShell modules, available in the PowerShell Gallery. To install the required modules: Install-Module AzureRM, Azure

Example:

$ParameterList = @{
    LoginName = 'sam@dmain.com'
    SubscriptionName = 'my subscription name'
    StorageAccountName = 'storfluxwidget3vm'
    ContainerName = 'vhds'
    BlobName = 'Widget3VM-20181226-093810.vhd'
}
Get-AzureRMUnmanagedDiskSnapshot @ParameterList

To list snapshots in a given time frame, we filter on the SnapshotTime property of the output provided by this function. The function returns an object of type Microsoft.WindowsAzure.Storage.Blob.CloudPageBlob for each snapshot found that matches the provided StorageAccount/Container/Blob parameters. The SnapshotTime property is of type DateTimeOffset, which cannot be compared directly to the DateTime type. To do the required filtering/comparison, we use the [DateTimeOffset].ToLocalTime() method, as in:

Get-AzureRMUnmanagedDiskSnapshot @ParameterList | 
    where { [DateTime]$_.SnapshotTime.ToLocalTime().ToString() -GE [DateTime]'2019-01-02 8:45' }

This lists snapshots taken at or after 2 Jan 2019, 8:45 AM local time.

New-AzureRMUnmanagedDiskSnapshot

This function creates a new disk snapshot for a given unmanaged disk.

Example:

$ParameterList = @{
    LoginName = 'sam@dmain.com'
    SubscriptionName = 'my subscription name'
    StorageAccountName = 'storfluxwidget3vm'
    ContainerName = 'vhds'
    BlobName = 'Widget3VM-20181226-093810.vhd'
}
New-AzureRMUnmanagedDiskSnapshot @ParameterList

Remove-AzureRMUnmanagedDiskSnapshot

This function removes one or more disk snapshots for a given unmanaged disk. In addition to the 5 parameters (LoginName, SubscriptionName, StorageAccountName, ContainerName, and BlobName) that this group of functions takes, it accepts 2 additional parameters: FromDate and ToDate. These allow us to delete snapshots taken during a given time frame.

Example:

$ParameterList = @{
    LoginName = 'sam@dmain.com'
    SubscriptionName = 'my subscription name'
    StorageAccountName = 'storfluxwidget3vm'
    ContainerName = 'vhds'
    BlobName = 'Widget3VM-20181226-093810.vhd'
}
Remove-AzureRMUnmanagedDiskSnapshot @ParameterList

This deletes all disk snapshots for the provided unmanaged disk in the provided StorageAccount/Container.
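
To delete only snapshots taken during a given window, the FromDate/ToDate parameters described above can be added; a sketch (the exact date format accepted is an assumption):

Remove-AzureRMUnmanagedDiskSnapshot @ParameterList -FromDate ([DateTime]'2019-01-01') -ToDate ([DateTime]'2019-01-08')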


Page File functions added to AZSBTools PowerShell module to Get, Set, Remove page file(s)


3 new functions have been added to the AZSBTools PowerShell module to view, create, and modify page file settings on a computer running Windows 7/Windows Server 2008 or later.

Get-PageFile

This simple function takes no parameters and returns a PS custom object for each page file, with the following 3 properties:

  • DriveLetter: such as 'c', or 'e', …
  • InitialSizeMB: such as 1024 (a 0 value indicates a system-managed page file)
  • MaximumSizeMB: such as 4096 (a 0 value indicates a system-managed page file)

For example:

Set-PageFile

This function changes the page file settings on a given drive letter to the specified initial and maximum sizes in MB. It takes one parameter that’s similar to the PS custom object returned by the Get-PageFile function.

The -PageFile parameter of the Set-PageFile function accepts a PS Custom Object containing the following 3 properties:

  • DriveLetter: such as 'c'
  • InitialSizeMB: such as 1024 (a 0 value indicates a system-managed page file)
  • MaximumSizeMB: such as 4096 (a 0 value indicates a system-managed page file)

This object can be constructed manually as in:

$PageFile = [PSCustomObject]@{
    DriveLetter   = 'c'
    InitialSizeMB = 0 
    MaximumSizeMB = 0 
}

or obtained from the Get-PageFile function.

For example, to configure all page files on all drives to system managed size:

Get-PageFile | foreach { $_.InitialSizeMB = 0; $_.MaximumSizeMB = 0; $_ } | Set-PageFile

Note that page file changes require a reboot to take effect. Rebooting is not part of this function.

Remove-PageFile

Finally, this simple function removes the page file from a given drive. It takes one parameter, the drive letter, such as 'e'.

The 3 functions can be used in user scripts to move the page file from one drive to another. For example:

# Create a system-managed page file on E:
Set-PageFile -PageFile ([PSCustomObject]@{
    DriveLetter   = 'e'
    InitialSizeMB = 0
    MaximumSizeMB = 0
})
# Then remove the page file from C: (-EA 0 suppresses the error if none exists)
Remove-PageFile 'c' -EA 0



Validate-NameResolution cmdlet added to AZSBTools PowerShell module


In the course of IaaS VM migrations from on-premises to Azure, the VM IP address changes. In a Windows VM, we typically invoke a command like

ipconfig /registerdns

or

Register-DNSClient

which is part of the DnsClient PowerShell module.

The Validate-NameResolution cmdlet, part of the AZSBTools module, queries each DNS server in the current AD forest to make sure that all DCs resolve a given computer name to the same IP. This helps to diagnose instances where DNS partition replication is not functioning properly, where some DCs resolve a given computer name to the old on-premises IP while others resolve the same name to the new Azure IP.

This cmdlet takes one required parameter, -ComputerName, which accepts one or more computer names.

Example: Validate-NameResolution -ComputerName 'myTestPC'

The cmdlet outputs interim information to the console like:

In addition, it returns a PSCustomObject for each resolved IP address, with the following properties: ComputerName, ResolvesTo, and DNSServer.
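
A sketch of using these objects to flag names that resolve inconsistently across DCs (relying only on the ComputerName, ResolvesTo, and DNSServer properties described above):

Validate-NameResolution -ComputerName 'myTestPC' |
    Group-Object ComputerName |
    Where-Object { ($_.Group.ResolvesTo | Select-Object -Unique).Count -gt 1 } | # more than one distinct IP
    ForEach-Object { $_.Group | Format-Table ComputerName, ResolvesTo, DNSServer }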


To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type

$PSVersionTable

To download and install the latest version of AZSBTools from the PowerShell Gallery and its dependencies, type

Set-PSRepository -Name PSGallery -InstallationPolicy Trusted 
Install-Module AZSBTools,AzureRM -Force

AZSBTools contains functions that depend on AzureRM module, and they’re typically installed together.

To load the AZSBTools and AzureRM modules type:

Import-Module AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show

Azure storage – features and pricing – June 2018


In the ever-evolving list of Azure storage offerings, it can be hard to keep track of what is available and its general cost structure at a given point in time. This post lays out a general summary of Azure storage offerings and costs, from the perspective of a consultant trying to recommend to a large client which storage type/options to use for which workload and why.

Storage account type

Classic storage

  • This is of type 'Microsoft.ClassicStorage/storageAccounts'
  • This is considered legacy and should be migrated to GPv1, which provides backward compatibility with Azure classic (ASM) services

General purpose v1 (GPv1)

  • This is of type 'Microsoft.Storage/storageAccounts', kind 'Storage'
  • Does not support Cool and Archive access tier attributes
  • Lower read/write transaction cost compared to GPv2
  • Can be used with Azure classic (ASM) services

General purpose v2 (GPv2)

  • This is of type 'Microsoft.Storage/storageAccounts', kind 'StorageV2'
  • Supports Cool and Archive access tier attributes
  • Access Tier attribute (Hot/Cool) is exposed at the account level
  • Cannot be used with Azure classic (ASM) services
  • Compared to a Blob Storage account: GPv2 charges for Cool early deletion but not for Cool data writes (per GB)

Blob storage

  • This is of type 'Microsoft.Storage/storageAccounts', kind 'BlobStorage'
  • This is a sub-type of GPv2 that supports only Block Blobs – not Page Blobs, Azure files, …
  • Minor pricing difference: charges for Cool data writes (per GB) but not for Cool early deletion
  • Features similar to GPv2:
    • Supports Cool and Archive access tier attributes
    • Access Tier attribute (Hot/Cool) is exposed at the account level
    • Cannot be used with Azure classic (ASM) services

GPv1 and Blob storage accounts can be upgraded to a GPv2 account. This change cannot be reversed.

Set-AzureRmStorageAccount -ResourceGroupName <resource-group> -AccountName <storage-account> -UpgradeToStorageV2

Data Access Tiers

Data Access Tiers are Hot, Cool, and Archive. They represent 3 different physical storage platforms.

For the purpose of simplification, I will refer to LRS pricing in the East US Azure region, in US dollars, for the next 450 TB/month pricing tier.

  • Data Access Tiers are supported in GPv2 accounts only (and the Blob storage sub-type), not GPv1
  • A blob cannot be read directly from the Archive tier. To read a blob in Archive, it must first be moved to Hot or Cool.
  • Data in the Cool tier is subject to a minimum stay period of 30 days. Moving 1 TB of data to the Cool tier, for example, obligates the client to pay for that storage for at least 30 days, even if the data is moved or deleted before the end of the 30 days. This is implemented via the prorated 'early deletion charge' (see the worked example after this list).
  • Data in the Archive tier is subject to a minimum stay period of 180 days.
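
A worked example of the proration, using the Cool storage rate quoted below (1.5 cents per GB per month):

$SizeGB = 1024; $CoolPerGBMonth = 0.015
# Moved to Cool, then deleted after 10 days: the remaining 20 of the 30-day
# minimum stay is still charged, prorated, as the early deletion fee
$EarlyDeletionFee = $SizeGB * $CoolPerGBMonth * (20/30)   # = $10.24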

Hot

  • This is the regular standard tier where data can be routinely read and written
  • Storage cost is 2 cents per GB per month
  • Lowest read and write IO charges (5 cents per 10k write, 0.4 cents per 10k read)
  • No data retrieval charge (this is the cost to copy data to the Hot tier from another tier before it can be read – data in Hot is already readable)

Cool

  • 25% cheaper storage cost down to 1.5 cents per GB per month
  • More costly read and write IO charges (10 cents per 10k write = 200% Hot, 1 cent per 10k read = 250% Hot)
  • 1 cent per GB data retrieval charge (this is the cost to copy the data to Hot tier, which is a required interim step to read data that resides in Cool tier)

Archive

  • 90% cheaper storage cost down to 0.2 cents per GB per month
  • Most costly read and write IO charges (10 cents per 10k write = 200% Hot, 5 dollars per 10k read = 125,000% Hot)
  • 2 cent per GB data retrieval charge (this is the cost to copy the data to Hot tier, which is a required interim step to read data that resides in Archive tier)
  • The archive tier can only be applied at the blob level, not at the storage account level (see the sketch after this list)
  • Blobs in the archive storage tier have several hours of latency (Is this tier using tape not disk!?)
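
A sketch of setting the tier on a single blob using the AzureRM-era storage cmdlets (the resource group, account, container, and blob names here are hypothetical; tiering applies to block blobs only):

$Ctx  = (Get-AzureRmStorageAccount -ResourceGroupName 'myRG' -Name 'mystorageacct').Context
$Blob = Get-AzureStorageBlob -Container 'backups' -Blob 'archive-2017.zip' -Context $Ctx
$Blob.ICloudBlob.SetStandardBlobTier('Archive')   # per-blob; expect hours of latency to read it back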

Geo-replication

Geo-replication refers to automatically keeping a copy of a given Storage Account data in another Azure region that’s 400 miles or more away from the primary site. The primary Azure site is the Azure region where the Storage account resides and where the usual read/write transactions occur. The choice of a secondary site is a constant determined by Microsoft and is available if the client chooses the geo-replication feature of a Storage Account. The following is the list of the Azure region pairs as of 18 June 2018

Data in the secondary Azure region cannot be accessed. It is only used under the following conditions:

  • Microsoft declares a region-wide outage, for example East US. This is a Microsoft-triggered, not client-triggered, event
  • Microsoft will first try to restore the service in that region. During that time, data is not accessible for read or write. It’s unclear how long Microsoft will pursue this effort before moving to geo-failover.
  • Microsoft initiates geo-failover from the primary to secondary region. That’s all data of all tenants in the primary region.
    • This is essentially a DNS change to re-point the storage end points’ fully qualified domain names to the secondary region
    • The RPO (Recovery Point Objective) is 15 minutes. That’s to say up to 15 minutes worth of data may be lost.
    • The RTO (Recovery Time Objective) is unclear. That’s the time between primary site outage to data availability for read and write on the secondary site
    • At the end of the geo-failover, read/write access is restored (for geo-replicated storage accounts only of course)
    • At a later time, Microsoft will perform a geo-failback which is the same process in the reverse direction
  • This is a process that has never happened before. No Azure data center has ever sustained a complete loss.
  • It’s unclear when failback would be triggered, whether it will include downtime, or whether it means another 15 minutes of data loss.

A storage account can be configured in one of several choices with respect to geo-replication:

LRS (Locally redundant storage)

When a write request is sent to an Azure Storage Account, the transaction is fully replicated on three different physical disks across three fault domains and upgrade domains inside the primary location; then success is returned to the client.

GRS (Geo-redundant storage)

In addition to the triple synchronous local writes, GRS adds triple asynchronous remote writes of each data block. Data is asynchronously replicated to the secondary Azure site within 15 minutes.

ZRS (Zone-redundant storage)

ZRS is similar to LRS but it provides slightly more durability than LRS (12 9’s instead of 11 9’s for LRS over a given year).

It’s only available to GPv2 accounts.

RA-GRS (Read access geo-redundant storage)

RA-GRS is similar to GRS but it provides read access to the data in the secondary Azure site.


Azure Cloud Shell


In a prior post I went over getting started with Azure automation using an Azure Automation Account. Another way to run PowerShell scripts against an Azure subscription is to use Azure Cloud Shell. I think of an Azure Automation Account as PaaS PowerShell, whereas Azure Cloud Shell is more like IaaS PowerShell (akin to PowerShell Web Access running in an Azure container).

Access

Azure Cloud Shell can be accessed from https://shell.azure.com/ or by clicking the Cloud Shell icon in the Azure Portal.

Features

We have most of the common PowerShell features such as tab completion. Initially Get-Module shows:

Copy/paste works, although keyboard shortcuts like CTRL-C and CTRL-V do not.

Unlike Azure Automation Account, we can install and import PS modules directly from the shell:

We get drive y: as an Azure file share for persistent storage, since these PS sessions are not persistent.

We have access to more than PowerShell in this shell, such as Azure CLI and Python, which I choose to completely ignore in this post, and focus on PowerShell only 🙂

The Azure: drive provides access to the available subscriptions and their objects:

So, we can list VMs under a given subscription by simply iterating the objects under azure:\subscription_name\VirtualMachines !!
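
For example (a sketch; the subscription name is a placeholder):

Get-ChildItem 'Azure:\my subscription name\VirtualMachines' |
    Select-Object Name, ResourceGroupName, Location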

You can tell that the Azure: drive provider is using the corresponding AzureRM cmdlets to fetch the requested objects. In this example, it’s calling the Get-AzureRmVM cmdlet.

We can upload files to the y: drive Azure file share directly from the Azure portal:


Azure Automation – getting started


Azure Automation allows Azure administrators to run PowerShell and other scripts against an Azure subscription. It provides several benefits versus running the same scripts from the user’s desktop computer, including:

  • Scripts run in Azure and are not dependent on the end-user desktop
  • Scripts are highly available by design.
  • Scheduling is a built-in feature
  • Authentication is streamlined for both classic ASM and current ARM resources

To get started with Azure Automation:

  1. Create an Azure Automation account
  2. Install needed PowerShell modules
  3. Create, run, test, schedule scripts

Create an Azure Automation account

In the current portal, Create Resource > Monitoring and Management > Automation > Create

In the ‘Add Automation Account’ blade enter/select a name for the Automation Account, Azure Subscription, Resource Group, and Azure Location

Azure will take a few minutes to create the automation account and associated objects.

We can now run scripts against the Azure subscription selected above. Here are some examples:

Create a test script

In the Automation Account blade, click Runbooks

Click ‘Add a runbook’ link on the top to create a new runbook of type PowerShell

Azure creates the runbook/script, and opens the ‘Edit PowerShell Runbook’ blade

Type in the desired command, click Save, then ‘Test pane’

In the ‘Test’ blade, click ‘Start’. Azure will queue and execute the script

Notes:

  • This is not like the PowerShell ISE. There’s no auto-completion for one thing.
  • If Azure comes across a bad command, it will try to execute THE ENTIRE SCRIPT repeatedly, and is likely to get stuck.
  • This shell does not support user interaction. So, any cmdlet that would typically require a user confirmation/interaction of any type will fail. For example, Install-Module cmdlet will fail since it requires user approval/interaction to install PowerShellGet.

Install needed modules

To see available modules click ‘Modules’ in the Automation Account blade

Click ‘Browse Gallery’ on top and search for the desired module

These modules come from the Microsoft PowerShell Gallery.

Click on the desired module, view its functions, and click Import to import it to this automation shell

Now that the module is imported, we can use it in scripting in this particular automation shell:


Azure Data Box


Azure Data Box is Microsoft’s parallel to AWS’s Snowball Edge and Google’s Transfer Appliance. It’s the evolution of the Azure Import/Export service, which allows a client to use client-provided disks to import/export data to/from Azure. As of April 2018:

Basic information

  • Azure Data Box is a 45 lb. NAS
  • 80 TB Usable Storage / 100 TB Physical Storage
  • Cost: $80 + shipping both ways + egress charges if exporting from Azure
  • 7-10 days processing time from device receipt date

Use Case

File share transfer/initial seeding

  • Azure Data Box connects to the client network as an IP NAS
  • The client uses the free WAImportExport tool to copy BitLocker-encrypted data to the Azure Data Box at local on-premises LAN speeds
  • At a LAN speed of 1 Gbps and a sustained local disk transfer rate of 128 MB/s, it takes about 7.5 days to copy 80 TB of data (the arithmetic is worked out below), provided the client does not require/impose copy throttling to maintain adequate data access performance.
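
The 7.5-day estimate, worked out:

$Seconds = (80 * 1TB) / (128 * 1MB)   # 1TB and 1MB are PowerShell's built-in byte constants
$Days    = $Seconds / 86400           # ~7.6 days of sustained copying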

Benefits/Limitations

  • Azure Data Box provides a free short-term rental of a 100 TB NAS. Alternatively, a client can use their own disk(s) or NAS devices to transfer data to Azure using the same steps.
  • Azure Data Box is not intended for VM replication or migration.
  • It takes about 3 weeks between the start of an 80 TB local data copy on-premises and the data being available in an Azure Storage Account (1+ week for the local copy, 1+ week for Microsoft processing, plus shipping time)
  • Azure Data Box is useful for transferring mostly static data from on-premises to Azure. The daily data change rate must be below 5% (otherwise 100% of the data could have changed during the ~20 days between the start of the copy and the data being available in Azure)

 

Expand-Json cmdlet to expand custom PowerShell object in a more readable format added to AZSBTools PowerShell module


Microsoft Azure REST API version 2 (ARM – Azure Resource Manager) takes its input request body and returns output in JSON format. Consequently, Azure PowerShell cmdlets and the Azure CLI tend to use similar JSON objects for input, also known as ARM templates.

For example, using this PowerShell cmdlet:

Get-AzureRmResource -ResourceId /subscriptions/xxxxx/resourceGroups

where xxxxx is your Azure subscription Id, may return output similar to:

Name : prod-mgt
ResourceId : /subscriptions/xxxxx/resourceGroups/prod-mgt
ResourceGroupName : prod-mgt
Location : eastus
SubscriptionId : xxxxx
Properties : @{provisioningState=Succeeded}

Name : TestAuto1
ResourceId : /subscriptions/xxxxx/resourceGroups/TestAuto1
ResourceGroupName : TestAuto1
Location : westeurope
SubscriptionId : xxxxx
Properties : @{provisioningState=Succeeded}

What the PowerShell cmdlet did was send a GET request to the Azure Management API that looks partially like:

https://management.azure.com/subscriptions/xxxxx/resourceGroups?api-version=2014-04-01

Which returned JSON output similar to:

{
  "value": [
    {
      "id": "/subscriptions/xxxxx/resourceGroups/prod-mgt",
      "name": "prod-mgt",
      "location": "eastus",
      "properties": {
        "provisioningState": "Succeeded"
      }
    },
    {
      "id": "/subscriptions/xxxxx/resourceGroups/TestAuto1",
      "name": "TestAuto1",
      "location": "westeurope",
      "properties": {
        "provisioningState": "Succeeded"
      }
    }
  ]
}

In the course of working with Azure ARM templates, such as this template to create a Storage Account:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "allowedValues": [
        "Standard_LRS",
        "Standard_GRS",
        "Standard_ZRS",
        "Premium_LRS"
      ],
      "metadata": {
        "description": "Storage Account type"
      }
    }
  },
  "variables": {
    "storageAccountName": "[concat(uniquestring(resourceGroup().id), 'standardsa')]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "name": "[variables('storageAccountName')]",
      "apiVersion": "2016-01-01",
      "location": "[resourceGroup().location]",
      "sku": {
        "name": "[parameters('storageAccountType')]"
      },
      "kind": "Storage", 
      "properties": {
      }
    }
  ],
  "outputs": {
    "storageAccountName": {
      "type": "string",
      "value": "[variables('storageAccountName')]"
    }
  }
}

It may not be very clear what the objects in the template are and how they’re nested. Using the ConvertFrom-Json cmdlet from the Microsoft.PowerShell.Utility module produces a PS custom object with display similar to:

Get-Content E:\Scripts\ARMTemplates\Storage1.json | ConvertFrom-Json

$schema : https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#
contentVersion : 1.0.0.0
parameters : @{storageAccountType=}
variables : @{storageAccountName=[concat(uniquestring(resourceGroup().id), 'standardsa')]}
resources : {@{type=Microsoft.Storage/storageAccounts; name=[variables('storageAccountName')]; apiVersion=2016-01-01; location=[resourceGroup().location]; sku=; kind=Storage; properties=}}
outputs : @{storageAccountName=}

This is better but it doesn’t show some of the information in the source JSON file/ARM template. The new Expand-Json cmdlet further expands the ConvertFrom-Json output:

Get-Content E:\Scripts\ARMTemplates\Storage1.json | ConvertFrom-Json | Expand-JSON
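
For the curious, here is an illustrative sketch of what such an expansion involves (this is not the module’s actual implementation): a recursive walk that prints every leaf value with its full path:

function Expand-JsonSketch ($Node, $Path = '$') {
    if ($Node -is [System.Management.Automation.PSCustomObject]) {
        # Nested object: recurse into each property
        foreach ($Prop in $Node.PSObject.Properties) { Expand-JsonSketch $Prop.Value "$Path.$($Prop.Name)" }
    } elseif ($Node -is [System.Collections.IEnumerable] -and $Node -isnot [string]) {
        # Array: recurse into each element with its index
        $i = 0
        foreach ($Item in $Node) { Expand-JsonSketch $Item "$Path[$i]"; $i++ }
    } else {
        "$Path = $Node"   # leaf value with its full path
    }
}
Get-Content E:\Scripts\ARMTemplates\Storage1.json | ConvertFrom-Json | ForEach-Object { Expand-JsonSketch $_ }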


To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type

$PSVersionTable

To download and install the latest version of AZSBTools from the PowerShell Gallery and its dependencies, type

Install-Module POSH-SSH,SB-Tools,AZSBTools,AzureRM -Force

AZSBTools contains functions that depend on POSH-SSH, SB-Tools, and AzureRM modules, and they’re typically installed together.

To load the POSH-SSH, SB-Tools, AZSBTools, and AzureRM modules type:

Import-Module POSH-SSH,SB-Tools,AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show