Archive for October, 2014

Powershell script/function to expand system/boot disk on Windows XP/2003 VMs

In Windows 7 and Server 2008 virtual machines and above, expanding the boot/system disk is a simple matter of expanding it in the hypervisor, then extending it in the guest OS via the Computer Management/Disk Management GUI. In Windows XP/Server 2003 guest VMs, expanding the boot/system disk is not possible with native Windows tools.

This script leverages Server 2012 R2 hypervisor capabilities to expand boot/system disk on a guest VM running Windows XP/2003 OS. The script leaves a log file behind listing steps taken. The script can be downloaded from the Microsoft Script Center Repository.

To use it: download the attached file, unblock it, adjust the PS execution policy as needed, run the script to load the function into memory, then use this line to get detailed help and examples as shown below:
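The help line itself is not shown above; given the function name used later in this post, it is presumably along these lines (the script file name is an assumption):

```powershell
# Hypothetical: dot-source the script to load the function, then view its built-in help
. .\Expand-C.ps1        # file name assumed
Help Expand-C -Full     # detailed help and examples
```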


Note: The script will shut down the VM during this process.

For example, if you shut down the VM and expand the disk in the Hyper-V Manager GUI:


In the VM, you cannot expand the boot/system partition with native Windows 2003/XP tools:


You can do that with this script. On the Hyper-V host where the VM is running, run:

Expand-C -VMName MyVM1 -Size 17GB -BackupPath "d:\save"

The script will:

  • Back up the VHDX file before expanding it if ‘BackupPath’ is specified
  • Convert the file from VHD to VHDX format if it was a VHD file. In this case you’ll need to delete the old .vhd file manually.
  • Shut down the VM (gracefully)
  • Expand the VHDX file
  • Expand the partition
  • Re-attach the C: drive disk file to the VM
  • Start the VM
  • Leave a log file listing the steps taken
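Under the hood, the steps above can be sketched with standard Hyper-V and Storage cmdlets (a simplified illustration, not the actual script; the VM name and disk path are placeholders):

```powershell
$VMName = "MyVM1"                  # placeholder VM name
$VHDX   = "D:\VMs\MyVM1\C.vhdx"    # placeholder path to the boot/system disk

Stop-VM -Name $VMName              # graceful shutdown
Resize-VHD -Path $VHDX -SizeBytes 17GB

# Mount the disk on the host and extend the partition to the new maximum
$Disk = Mount-VHD -Path $VHDX -Passthru | Get-Disk
$Max  = (Get-PartitionSupportedSize -DiskNumber $Disk.Number -PartitionNumber 1).SizeMax
Resize-Partition -DiskNumber $Disk.Number -PartitionNumber 1 -Size $Max
Dismount-VHD -Path $VHDX

Start-VM -Name $VMName
```

This sketch assumes the system partition is partition 1; the actual script also handles the VHD-to-VHDX conversion and logging described above.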



GUI or no GUI!?

The GUI (Graphical User Interface) is just a Windows feature in Server 2012. You can add and remove it as needed. This also applies to Server 2012 R2 and the Windows Server Technical Preview.

Server 2012 comes with 4 levels of GUI:

  1. No GUI = Core
  2. Minimal GUI = Server-Gui-Mgmt-Infra feature
  3. Regular GUI = minimal + Server-Gui-Shell (default if you install as GUI)
  4. Full GUI = regular + Desktop-Experience

In Powershell the GUI options are displayed as:


If the server ever had the GUI installed, then the bits are there (under C:\Windows\WinSxS by default). If this is a Core install that has never had a GUI, then the bits are likely missing.

To check whether the bits are there or not:

$ComputerName = "MyCoreServer"
$Session = New-PSSession -ComputerName $ComputerName
Enter-PSSession -Session $Session
Get-WindowsFeature | Where { $_.Installed }

These commands will enter a remote PS session with the Core server, and list installed features.

This command will check for the 2 features we need to have the GUI:

Get-WindowsFeature | 
    where { $_.Name -eq "Server-Gui-Mgmt-Infra" -or 
            $_.Name -eq "Server-Gui-Shell" }

If the result looks like:


Removed = not installed AND the bits are missing.
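One way to see this state directly is to query the InstallState property for the two features in question:

```powershell
# InstallState is Installed, Available (bits present), or Removed (bits missing)
Get-WindowsFeature -Name Server-Gui-Mgmt-Infra, Server-Gui-Shell |
    Select-Object Name, InstallState
```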

We need the files from the WS 2012 media, so mount the install DVD.

Next, identify which drive letter is your DVD drive, run:
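The command is not shown above; one way to list optical drives and their letters is via WMI:

```powershell
# List CD/DVD drives and their drive letters
Get-WmiObject -Class Win32_CDROMDrive | Select-Object Drive, Caption
```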


Next, identify the index number of the installation media needed, run:

Get-WindowsImage -ImagePath D:\sources\install.wim

This should display:


The server version I’m working with in this example is DataCenter, so the media I need is index #4, run:

Install-WindowsFeature -Name "Server-Gui-Mgmt-Infra","Server-Gui-Shell" -Source wim:d:\sources\install.wim:4

Reboot, and you've got a GUI.


To remove the GUI later, run:

Remove-WindowsFeature -Name "Server-Gui-Mgmt-Infra","Server-Gui-Shell"


Powershell script to reduce dynamic VHDX disk size

Have you ever been in the situation where you have a dynamic VHDX disk, cleaned up some space by deleting unneeded files, but the VHDX file size on the underlying disk remained the same? Take this example: I started with this test disk:


Then I converted it to dynamic:

Dismount-VHD -DiskNumber 13 -Confirm:$false
Convert-VHD -Path 'd:\Fixed1.vhdx' -DestinationPath 'd:\Dynamic1.vhdx' -VHDType Dynamic

Then I copied some files taking about 2.6 GB of the test 4 GB disk:


Then I deleted about 1.9 GB worth of files:


Yet, the VHDX file still takes the same space on the underlying disk:


The following script zeros out unused space on the VHDX file, and compacts it. It can be downloaded from the Microsoft Script Center Repository. The script works on both VHD and VHDX files.

To use it, download the .rar file, decompress it, unblock the 2 files, adjust the PS execution policy as needed, run the script to load the function into memory, then use it.

To see help use:

Help Compact-VHDX -Full


To compact a disk, run:

Compact-VHDX -VHDXPath D:\Dynamic1.vhdx -SDelete .\sdelete.exe
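For reference, the manual steps the script automates look roughly like this (a sketch; the mounted drive letter is an assumption):

```powershell
Mount-VHD -Path D:\Dynamic1.vhdx                 # mount so sdelete can see the volume
.\sdelete.exe -z F:                              # zero free space (F: assumed to be the mounted volume)
Dismount-VHD -Path D:\Dynamic1.vhdx
Optimize-VHD -Path D:\Dynamic1.vhdx -Mode Full   # reclaim the zeroed blocks
```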

The disk to be compacted must be dismounted first. If the script is run on a disk that’s already mounted, you’ll get a message like:


Otherwise, the script will run, giving output similar to:


This may take a while depending on the speed of the underlying disk system and the size of the disk being compacted.

When done the output will look like:


The reduced file size can be confirmed in Windows Explorer as well:


Script output is also saved to log file:


Options for using a Veeam Backup Repository on an Azure Virtual Machine

January 2015 update:

In December 2014, Microsoft announced the Public Preview Availability release of Azure Premium Storage. See this post for details on Azure Premium Storage features. What does that mean in terms of using Azure for a Veeam Backup Repository, or for Veeam Cloud Connect?

  • Maximum disk capacity per VM remains a bottleneck at 32 TB.
    • Only D14 VM size at this time can have 32x 1TB Page Blob disks. It comes with 16 cores, 112 GB RAM, 127 GB SAS system disk, 800 GB SSD non-persistent temporary drive ‘d’ that delivers 768 MB/s read or 384 MB/s write throughput. Base price for this VM is $1,765/month
    • If using 32 Standard (spinning SAS) disks, set as a 16-column single simple storage space for maximum space and performance, we get a 32 TB data disk that delivers 960 MB/s throughput or 8k IOPS (256 KB block size).
      • 32x 1TB GRS Standard (HDD) Page Blobs cost $2,621/month
      • 32x 1TB LRS Standard (HDD) Page Blobs cost $1,638/month
    • If using 32 Premium (SSD) disks, set as a 16-column single simple storage space for maximum space and performance, we get a 32 TB data disk that delivers 3,200 MB/s throughput or 80k IOPS (256KB block size). Premium SSD storage is available as LRS only. The cost for 32x 1TB disks is $2,379/month
  • If using a D14 size VM with Cloud Connect, setting up the Veeam Backup and Replication 8, WAN Accelerator, and CC Gateway on the same VM:
    • 16 CPU cores provide adequate processing for the WAN Accelerator, which is by far the component here that uses the most CPU cycles. It’s also plenty for the SQL 2012 Express instance used by Veeam 8 on the same VM.
    • 112 GB RAM is overkill here in my opinion. 32 GB should be plenty.
    • 800 GB SSD non-persistent temporary storage is perfect for the WAN Accelerator global cache. WAN Accelerator global cache disk must be very fast. The only problem is that it’s non-persistent, but this can be overcome by automation/scripting to maintain a copy of the WAN Accelerator folder on the ‘e’ drive 32 TB data disk or even on an Azure SMB2 share.
    • In my opinion, cost benefit analysis of Premium SSD Storage for the 32-TB data disk versus using Standard SAS Storage shows that Standard storage is still the way to go for Veeam Cloud Connect on Azure. It’s $740/month cheaper (31% less) and delivers 960 MB/s throughput or 8k IOPS at 256KB block size which is plenty good for Veeam.
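For illustration, the 16-column single simple storage space mentioned above could be built along these lines (a sketch with assumed friendly names, not a tested build script):

```powershell
# Pool all poolable data disks, then carve one simple (striped) space with 16 columns
$Disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "VeeamPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $Disks
New-VirtualDisk -StoragePoolFriendlyName "VeeamPool" -FriendlyName "VeeamData" `
    -ResiliencySettingName Simple -NumberOfColumns 16 -UseMaximumSize
```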


10/20/2014 update:

Microsoft announced a new “Azure Premium Storage”. Main features:

  • SSD-based storage (persistent disks)
  • Up to 32 TB of storage per VM – this is what’s relevant here. I wonder why that capability isn’t extended to ALL Azure VMs?
  • 50,000 IOPS per VM at less than 1 ms latency for read operations
  • Not in Azure Preview features as of 10/21/2014. No preview or release date yet.

High level Summary:

Options for using Veeam Backup Repository on an Azure Virtual Machine include:

  1. Use Standard A4 VM with 16TB disk and about 300 Mbps throughput (VM costs about $6.5k/year)
  2. Use a small Basic A2 VM with several Azure Files SMB shares. Each is 5 TB, with 1 TB max file size, and 300 Mbps throughput.

Not an option:

An Azure subscription can have up to 50 Storage Accounts (as of September 2014), (100 Storage accounts as of January 2015) at 500TB capacity each. Block Blob storage is very cheap. For example, the Azure price calculator shows that 100TB of LRS (Locally Redundant Storage) will cost a little over $28k/year. LRS maintains 3 copies of the data in a single Azure data center.


However, taking advantage of that vast cheap reliable block blob storage is a bit tricky.

Veeam accepts the following types of storage when adding a new Backup Repository:


I have examined the following scenarios of setting up Veeam Backup Repositories on an Azure VM:

1. Locally attached VHD files:

In this scenario, I attached the maximum number of 2 VHD disks to a Basic A1 Azure VM, and set them up as a simple volume for maximum space and IOPS. This provides a 2TB volume and 600 IOPS according to Virtual Machine and Cloud Service Sizes for Azure. Using 64 KB block size:
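Attaching the two data disks with the (classic) Azure PowerShell module looks something like this (service and VM names are placeholders):

```powershell
# Attach two new empty data disks to the VM (classic Service Management cmdlets)
Get-AzureVM -ServiceName "MyService" -Name "MyVM" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "Data0" -LUN 0 |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "Data1" -LUN 1 |
    Update-AzureVM
```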



This short script shows block size (allocation unit) for drive e: used:

$DriveLetter = "e:"
$BlockSize = (Get-WmiObject -Query "SELECT BlockSize FROM Win32_Volume WHERE DriveLetter='$DriveLetter'").BlockSize/1KB
Write-Host "Allocation unit size on Drive $DriveLetter is $BlockSize KB" -ForegroundColor Green

This should come to 4.7 MB/s (37.5 Mbps) using the formula

IOPS = BytesPerSec / TransferSizeInBytes
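Plugging in the numbers (600 IOPS, and an assumed 8 KB effective transfer size, which is what reproduces the 4.7 MB/s figure):

```powershell
$IOPS = 600
$TransferSizeInBytes = 8KB
$IOPS * $TransferSizeInBytes / 1MB    # 4.6875, i.e. about 4.7 MB/s
```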

But actual throughput was about 2.5 MB/s (20 Mbps) as shown on the VM:


and in the Azure Management Portal:


Based on these results, I expect that a Standard A4 Azure VM configured with a 16TB simple (striped) disk, at a max of 8k IOPS, will actually deliver about 35 MB/s (300 Mbps).


2. Using Azure Files:

Azure Files is a new Azure feature that provides SMB v2 shares to Azure VMs with 5TB maximum per share and 1TB maximum per file.

Testing showed throughput upwards of 100 Mbps. Microsoft suggests that Azure Files throughput is up to 60 MB/s per share.
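Mapping an Azure Files share from inside the VM works like a regular SMB mapping (storage account name, share name, and key below are placeholders):

```powershell
# Persist the storage account credentials, then map the share
cmdkey /add:mystorageacct.file.core.windows.net /user:mystorageacct /pass:<StorageAccountKey>
net use Z: \\mystorageacct.file.core.windows.net\myshare
```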



Although this option provides adequate bandwidth, its main problem is the 1 TB maximum file size, which means a single backup job cannot exceed 1 TB. That is quite limiting in large environments.

DOT NET 3.5 for Windows 10 Technical Preview

DOT NET 3.5 was often hard to install on Windows 8, 8.1, 2012, and 2012 R2. I tested installing it on Windows 10 Technical Preview for Enterprise:


In Control Panel\Programs\Programs and Features, click “Turn Windows features on or off”, check the “.NET Framework 3.5” box, and click OK


Click “Download files from Windows Update”






It just works!

Same can be done via the DISM command line utility:

DISM /Online /Enable-Feature /FeatureName:NetFx3 /All


Again it works just fine!

Using SysPrep with Windows 10 Technical Preview for Enterprise

SysPrep.exe is a tool located in the c:\windows\system32\SysPrep folder. It can be used to “generalize” a Windows installation for automated deployment, instead of doing every fresh install from the ISO media.

After doing a fresh install of Windows 10 Technical Preview for Enterprise as a Gen 2 VM on a Server 2012 R2 Hyper-V host, I ran Windows Update, enabled Remote Desktop, installed 2 Windows updates, rebooted, installed RSAT, then ran SysPrep.exe


I chose to “Generalize” and “Shutdown”, so that I can copy the VM’s VHDX file to be used for other Windows 10 Technical Preview machine deployments.

Sysprep will prepare the system and shutdown the computer.
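The same choices can be made from an elevated command prompt instead of the GUI:

```powershell
# Command-line equivalent of the Generalize + Shutdown options chosen above
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown
```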

After copying the VHDX file, I restarted the VM:


Set Region and Language, accept license agreement, skip product key:



Although the product key was skipped, later on after the machine was setup and rebooted, Windows auto-activated. No need to enter a product key or manually activate:


Windows 10 Technical Preview and Windows 10 Technical Preview for Enterprise license expires April 15, 2015.

Enter a user name and password:



  • Do NOT join the domain prior to running Sysprep. It will fail with a “Fatal error”.
  • Currently Sysprep does not let you use a local account that was set up prior to running SysPrep. You will have to set up another local account when using the SysPrep’d image
  • Local Administrator account is disabled by default on Windows 10 Technical Preview


Remote Server Administration Tools (RSAT) for Windows 10 Technical Preview

Remote Server Administration Tools (RSAT) for Windows 10 Technical Preview are now available for download. Download and install the version that matches your Windows 10 installation (x86 or x64)


I wanted to know what Powershell modules this will add to a fresh Windows 10 Technical Preview for Enterprise. So I used this short script:


Get-Module -ListAvailable | 
 Select name,version,ModuleType,ClrVersion,PowershellVersion | 
 Export-Csv .\Win10b-modules.csv -NoTypeInformation

# Install RSAT tools for Windows 10 Technical Preview

Get-Module -ListAvailable | 
 Select name,version,ModuleType,ClrVersion,PowershellVersion | 
 Export-Csv .\Win10c-modules.csv -NoTypeInformation

Compare-Object -ReferenceObject (Import-Csv .\Win10b-modules.csv).name `
 -DifferenceObject (Import-Csv .\Win10c-modules.csv).name

The first 3 lines save a list of the installed PS modules in a CSV file.

Next I installed RSAT:


then accepted the license agreement.


Installation finished successfully. This added shortcuts to Server Manager tool (c:\windows\system32\ServerManager.exe) on the start menu and under All Apps


A comparison of the PS modules before and after the installation showed that RSAT added the following 19 modules:


If you encounter errors with Sysprep see this KB article. It’s for Windows 8 but it applies to Windows 10 Technical Preview as well.

1/8/2015 – Windows Technical Preview build 9879:

Some may have access to the new Windows Technical Preview build 9879 available in the MSDN subscription secure download site.



I’ve done a fresh install of Windows Technical Preview build 9879, and attempted to install RSAT normally. That just worked:


In another fresh install of WinTP 9879 I tried using DISM:


That completed successfully as well.

Some have reported errors attempting to install RSAT for Windows TP. I’ve downloaded the latest Windows TP ISO and did a fresh install as a Gen 2 virtual machine on Hyper-V 2012 R2. I downloaded and installed RSAT without any issue. I was not able to replicate the problem. However, here’s another way to try to install it:

Download the WindowsTH-KB2693643-x64.msu file as usual – save it to the default location under ‘downloads’

Run the following script in Powershell_ISE (as administrator – elevated permissions)


$Path = "$env:USERPROFILE\downloads\WindowsTH-KB2693643-x64.msu"
# Extract the .cab file
$Target = "$env:USERPROFILE\downloads\RSAT"
wusa.exe $Path /extract:$Target
# Install via DISM
$CAB = "$env:USERPROFILE\downloads\RSAT\"
Dism.exe /Online /Add-Package /PackagePath:$CAB



Windows 10 Technical Preview for Enterprise updates

On 10/2/2014 I ran Windows Update on a plain vanilla installation of Windows 10 Technical Preview for Enterprise. It found 1 update:


I installed, and rebooted


It took a little longer than expected for a Gen 2 VM. After reboot, I looked for installed updates, and found 2 (!):


The first update is KB3001512 which addresses these issues:

  • In Windows Technical Preview, certain devices do not receive firmware or driver updates.
  • Adobe Flash update does not contain premium video playback.
  • Some problems in the Compatibility View list for Internet Explorer 11

The second update is KB3002675. Powershell shows an information link for it, but the link appears to be broken at this time.


CloudBerry Drive Caching in Azure Virtual Machines and Azure Storage

CloudBerry Drive Server for Windows Server is a tool by CloudBerry that makes cloud storage available on a server as a drive letter. I have examined 10 different tools to perform this task, and CloudBerry drive provided the most functionality. The use case I was after is the ability to upload large files from on-prem servers to Azure VMs. Specifically, I’m testing Veeam Cloud Connect with Azure, which allows for off-site backup to Azure. The backup files are multi-TB each.

However, digging deeper into how CloudBerry drive works showed that CloudBerry Drive caches each received file to a local folder on the VM. According to CloudBerry support this is a must and cannot be turned off. This poses several problems:

  1. It defeats the purpose of using CloudBerry in the first place. An Azure VM (as of 10/2/2014) can have a maximum of 16 TB of local storage, which is implemented as 16x 1TB VHD files (page blobs). The point of using CloudBerry Drive is to be able to access Azure block blob storage, which has a 500 TB maximum per storage account.
  2. It puts a file size limit equivalent to the maximum amount of space on the local drive used for CloudBerry caching.
  3. CloudBerry Drive then takes the uploaded file from the cache folder and copies it to the Azure block blob storage account.
    1. This leaves the destination file in Azure block blob storage locked and unavailable for many hours during that 2nd copy process. For example, if a Veeam cloud backup job successfully backed up 10 out of 12 VMs and we retry the remaining 2 VMs, the job will fail since the destination file in Azure is locked by CloudBerry
    2. The 2nd copy uses a great amount of read IOPS from the local drive (Page Blobs) and write IOPS to the destination Block Blob storage. This makes any other task on the VM, such as another backup job, practically impossible, even if that job uses other, unlocked files, because CloudBerry is using up all available IOPS on the VM for hours or even days
    3. The copy incurs transaction, IOPS, and bandwidth charges on the Azure VM unnecessarily
    4. There are better ways to copy data within the same Azure Storage account that are much more efficient and less costly, such as instantaneous shadow copies.


CloudBerry Drive Server for Windows Server caches files locally, which makes it unsuitable for use on Azure VMs.

Windows 10 Technical Preview – Hyper-V Integration

After installing Windows 10 Technical Preview on a Windows Server 2012 R2 host, I checked the integration services. In Hyper-V Manager on the 2012 R2 host, all looked normal:


Running the following Powershell command on the 2012 R2 Hyper-V server showed that the Windows 10 Technical Preview VM comes with Integration Service version 6.4.9841
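The command itself is not shown above; the version can be read off the VM object like this (VM name as used later in this post):

```powershell
# Integration services version as reported by the 2012 R2 host
(Get-VM -Name v-Win10a).IntegrationServicesVersion
```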



This command shows more details:


Running this comparison between the integration services of this Windows 10 VM and a Server 2012 R2 VM:

Compare-Object -ReferenceObject (Get-VMIntegrationService -VMName v-Win10a | select *) -DifferenceObject (Get-VMIntegrationService -VMName v-2012R2-G2a | select *)

showed no difference. (Note that this does not show or compare the integration services version.)