New-SBAZServicePrincipal cmdlet to create new Azure AD Service Principal added to AZSBTools PowerShell module

For the use case of running PowerShell scripts that perform tasks on objects in an Azure subscription, we need to be able to run such scripts under a user context other than the script author's (running as the author is what typically happens during script development). A Service Principal is an Azure AD identity intended for this purpose. The New-SBAZServicePrincipal function automates and simplifies the process of creating an Azure Service Principal.


The New-SBAZServicePrincipal function takes the following parameters


This parameter accepts one or more Service Principal names


This parameter accepts a value that represents which Azure cloud to create the SPN in. It defaults to the Azure Commercial cloud. As of 15 March 2018 that list is:

  • AzureCloud
  • AzureUSGovernment
  • AzureChinaCloud
  • AzureGermanCloud

To see the current list, use: (Get-AzureRMEnvironment).Name


This parameter is used to assign Role/Permissions for the Service Principal in the current subscription.
The default value is the ‘Owner’ role.
As of 16 March 2018 the following default roles are defined:
  • API Management Service Contributor
  • Application Insights Component Contributor
  • Automation Operator
  • BizTalk Contributor
  • Classic Network Contributor
  • Classic Storage Account Contributor
  • Classic Storage Account Key Operator Service Role
  • Classic Virtual Machine Contributor
  • ClearDB MySQL DB Contributor
  • Cosmos DB Account Reader Role
  • Data Factory Contributor
  • Data Lake Analytics Developer
  • DevTest Labs User
  • DNS Zone Contributor
  • DocumentDB Account Contributor
  • Intelligent Systems Account Contributor
  • Log Analytics Contributor
  • Log Analytics Reader
  • Network Contributor
  • New Relic APM Account Contributor
  • Redis Cache Contributor
  • Scheduler Job Collections Contributor
  • Search Service Contributor
  • Security Manager
  • SQL DB Contributor
  • SQL Security Manager
  • SQL Server Contributor
  • Storage Account Contributor
  • Storage Account Key Operator Service Role
  • Traffic Manager Contributor
  • User Access Administrator
  • Virtual Machine Contributor
  • Web Plan Contributor
  • Website Contributor
For more details on roles, type in:

Get-AzureRmRoleDefinition | select name,description,actions | Out-GridView


The New-SBAZServicePrincipal function returns a PS Object for each input Service Principal Name containing the following properties:


The New-SBAZServicePrincipal function performs the following tasks for each provided Service Principal name:

  1. Create/Validate Azure AD App. The Azure AD App is required to create a Service Principal. It carries the same name and has an initial URL matching the same name as well
  2. Create/Validate Azure AD Service Principal. The user is prompted to enter the desired password for the SPN. The password is encrypted and saved in the user’s temp folder for use with future automations
  3. Assign the provided Role to the SPN for the current subscription. By default this is the ‘Owner’ role. This allows the created SPN to perform all tasks against the current subscription.

Registered Apps can also be viewed in the Azure portal under the Azure Active Directory/App Registrations blade:


$SPList = New-SBAZServicePrincipal -ServicePrincipalName PowerShell01,samtest1

This example creates 2 Service Principals, PowerShell01 and samtest1, in the default Azure Commercial cloud, and assigns them the default Owner role in the current subscription.

The New-SBAZServicePrincipal function first pops up the Azure login window to identify which subscription to use:

This function has been tested with both Azure Commercial and Azure US GOV clouds.

Next enter the desired password for each of the 2 provided Service Principals:

The function saves the encrypted password to the user temp folder for future use/automation.

It also displays console output similar to:

The Service Principals can now be used to run other PowerShell scripts.
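For example, once a Service Principal exists, a script can log in under its context using the saved password. This is a minimal sketch, not the module’s own code: the App ID, tenant ID, and password file name/format below are assumptions for illustration.

$AppId    = '<Application ID of the PowerShell01 App>'        # assumption: obtain from the function output or the portal
$TenantId = '<Azure AD tenant ID>'                            # assumption
$Password = Get-Content "$env:Temp\PowerShell01.txt" | ConvertTo-SecureString   # assumes the saved file is a ConvertFrom-SecureString string
$Cred     = New-Object System.Management.Automation.PSCredential ($AppId, $Password)
Add-AzureRmAccount -ServicePrincipal -Credential $Cred -TenantId $TenantId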

The newly registered/validated Apps can also be viewed from the Azure Portal

To use the AZSBTools PowerShell module which is available in the PowerShell Gallery, you need PowerShell 5. To view your PowerShell version, in an elevated PowerShell ISE window type
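$PSVersionTable.PSVersion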


To download and install the latest version of AZSBTools from the PowerShell Gallery and its dependencies, type

Install-Module POSH-SSH,SB-Tools,AZSBTools,AzureRM -Force

AZSBTools contains functions that depend on POSH-SSH, SB-Tools, and AzureRM modules, and they’re typically installed together.

To load the POSH-SSH, SB-Tools, AZSBTools, and AzureRM modules type:

Import-Module POSH-SSH,SB-Tools,AZSBTools,AzureRM -DisableNameChecking

To view a list of cmdlets/functions in AZSBTools, type

Get-Command -Module AZSBTools

To view the built-in help of one of the AZSBTools functions/cmdlets, type

help <function/cmdlet name> -show

such as

help New-SBAZServicePrincipal -show


StorSimple 8k software release 4.0

Around mid February 2017, Microsoft released StorSimple software version 4.0 (17820). This is a release that includes firmware and driver updates that require using Maintenance mode and the serial console.

Using this PowerShell script to save the Version 4.0 cmdlets and compare them to Version 3.0, I got:
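The linked script is not reproduced here; a minimal sketch of the comparison idea, assuming the cmdlet names of each version have already been saved to text files (the file names below are hypothetical):

$V3 = Get-Content .\HCS-Cmdlets-v30.txt      # one cmdlet name per line, captured on a 3.0 device with Get-Command
$V4 = Get-Content .\HCS-Cmdlets-v40.txt      # same, captured after the 4.0 update
Compare-Object -ReferenceObject $V3 -DifferenceObject $V4    # '=>' marks cmdlets new in 4.0, '<=' marks removed ones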


Trying the new cmdlets, the Get-HCSControllerReplacementStatus cmdlet returns a message like:


The Get-HCSRehydrationJob returns no output (no restore jobs are running)

The Invoke-HCSDiagnostics cmdlet seems pretty useful and returns output similar to:


The cmdlet takes a little while to run. In this case it took 14 minutes and 38 seconds:


It returns data from several sections, such as:

System Information section:


This is output similar to what we get from the Get-HCSSystem cmdlet for both controllers.

Update Availability section:


This is output similar to that of the Get-HCSUpdateAvailability cmdlet, although the MaintenanceModeUpdatesTitle property is empty!?


Cluster Information section:


This is newly exposed information. I’m guessing this is the output of some Get-HCSCluster cmdlet, but that is pure speculation on my part. I’m also guessing that this is a list of clustered roles in a traditional Server 2012 R2 failover cluster.

Service Information section:


This is also newly exposed information. Get-Service is not an exposed cmdlet.

Failed Hardware Components section:


This is newly exposed information. This device is in good working order, so this list may contain false warnings.

Firmware Information section:


This output is similar to what we get from the Get-HCSFirmwareVersion cmdlet.

Network Diagnostics section:


Most of this information is not new, but it’s nicely bundled into one section.

Performance Diagnostics section:


Finally, this section provides new information about read and write latency to the configured Azure Storage accounts.

The full list of exposed cmdlets in Version 4.0 is:


StorSimple 8k series as a backup target?

19 December 2016

After a conference call with the Microsoft Azure StorSimple product team, they explained:

  •  “The maximum recommended full backup size when using an 8100 as a primary backup target is 10TiB. The maximum recommended full backup size when using an 8600 as a primary backup target is 20TiB”
  • “Backups will be written to array, such that they reside entirely within the local storage capacity”

Microsoft acknowledged the difficulty resulting from the maximum provisionable space being 200 TB on an 8100 device, which limits the ability to over-provision thin-provisioned tiered iSCSI volumes when expecting significant deduplication/compression savings, for example with long term Veeam backup copy job files.


  • When used as a primary backup target, StorSimple 8k devices are intended for SMB clients with backup files under 10TB/20TB for the 8100/8600 models respectively
  •  Compared to using an Azure A4 VM with attached disks (page blobs), StorSimple provides 7-22% cost savings over 5 years

15 December 2016

On 13 December 2016, Microsoft announced support for using StorSimple 8k devices as a backup target. Many customers have asked for StorSimple to support this workload. The StorSimple hybrid cloud storage iSCSI SAN features automated tiering at the block level from its SSD to SAS to Azure tiers, which makes it a perfect fit for a primary data set of unstructured data such as file shares. It also features cloud snapshots, which provide the additional functionality of data backup and disaster recovery. That’s primary storage, secondary storage (short term backups), long term storage (multiyear retention), off site storage, and multi-site storage, all in one solution.

However, the very features that make the device a good fit for the primary data set/unstructured data pose significant difficulties when trying to use it as a backup target, such as:

  • Automated tiering: many backup software packages (like Veeam) do things like forward incremental backups, synthetic fulls, and backup copy jobs for long term retention, all of which scan/access files that are typically dozens of TB each. This causes the device to tier data to Azure and back to the local device in a way that slows things down to a crawl. DPM is even worse, specifically in the way it allocates/controls volumes.
  • The arbitrary maximum allocatable space for a device (200 TB for an 8100 device, for example) makes it practically impossible to use the device as a backup target for long term retention.
    • Example: a 50 TB volume where we need to retain 20 copies for long term backup. Even if the change rate is very low and the actual bits after deduplication and compression of 20 copies amount to 60 TB, we cannot provision 20x 50 TB volumes, or a 1 PB volume. This makes the maximum workload size around 3 TB if long term retention requires 20 recovery points. 3 TB is way too small a limit for enterprise clients who simply want to use Azure for long term backup, where a single backup file can be 10-200 TB.
  • The specific implementation of the backup catalog and which component (the backup software versus the StorSimple Manager service) owns it.
  • Single unified tool for backup/recovery – now we have to use the backup software and StorSimple Manager, which do not communicate and are not aware of each other
  • Granular recoveries (single file/folder). Currently to recover a single file from snapshot, we must clone the entire volume.

In this article, published 6 December 2016, Microsoft lays out their reference architecture for using a StorSimple 8k device as a Primary Backup Target for Veeam.


There are a number of best practices relating to how to configure Veeam and StorSimple in this use case, such as disabling deduplication, compression, and encryption on the Veeam side, dedicating the StorSimple device to the backup workload, …

The interesting part comes in when you look at scalability. Here’s Microsoft’s listed example of a 1 TB workload:


This architecture suggests provisioning 5x 5 TB volumes for the daily backups and a 26 TB volume for the weekly, monthly, and annual backups:


This 1:26 ratio between the Primary Data Set and Vol6, used for the weekly, monthly, and annual backups, suggests that the maximum supported Primary Data Set is 2.46 TB (the maximum volume size is 64 TB)!?


This reference architecture suggests that this solution may not work for a file share that is larger than 2.5 TB, or that may need to be expanded beyond 2.5 TB.

Furthermore, this reference architecture suggests that the maximum Primary Data Set cannot exceed 2.66 TB on an 8100 device, which has a 200 TB maximum allocatable capacity, after reserving 64 TB to be able to restore the 64 TB Vol6.


It also suggests that the maximum Primary Data Set cannot exceed 8.55 TB on an 8600 device, which has a 500 TB maximum allocatable capacity, after reserving 64 TB to be able to restore the 64 TB Vol6.


Even if we consider cloud snapshots to be used only in case of total device loss (disaster recovery), and we allocate the maximum device capacity, the 8100 and 8600 devices can accommodate 3.93 TB and 9.81 TB respectively:
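The arithmetic behind these figures, as I read the reference architecture (51 TB provisioned per 1 TB of protected data, i.e. 5x 5 TB daily volumes plus the 26 TB Vol6), can be sketched as:

$ProvisionedPerTB = 5*5 + 26       # 51 TB provisioned per 1 TB of Primary Data Set
64  / 26                           # ~2.46 TB: largest Primary Data Set whose weekly/monthly/annual copies fit a single 64 TB Vol6
(200 - 64) / $ProvisionedPerTB     # ~2.66 TB on an 8100, reserving 64 TB to be able to restore Vol6
(500 - 64) / $ProvisionedPerTB     # ~8.55 TB on an 8600, reserving 64 TB to be able to restore Vol6
200 / $ProvisionedPerTB            # ~3.9 TB on an 8100 with no reservation
500 / $ProvisionedPerTB            # ~9.8 TB on an 8600 with no reservation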



Although the allocation of 51 TB of space to back up 1 TB of data resolves the tiering issue noted above, it significantly erodes the value proposition provided by StorSimple.


Deploying StorSimple On-Premises Virtual Array (OVA) via GUI tools

The StorSimple model 1200 OVA (On-Premises Virtual Array) is available as a VHD/VHDX or VMDK file to be deployed on a local Hyper-V or VMware hypervisor.

Note that the StorSimple OVA model 1200 is incompatible with the StorSimple 8k series physical and virtual devices (8100, 8600, 8010, 8020). This means we cannot recover volumes from an 8k device to a 1200 OVA device or vice versa.

1. Deploy ‘Virtual Device Series’ StorSimple Manager:

You cannot deploy an OVA under your ‘Physical Device Series’ StorSimple Manager service. To deploy a ‘Virtual Device Series’ StorSimple Manager follow these steps in the classic portal:


Uncheck the box at the bottom to create a Storage Account.

Note that OVA is available on the following Azure regions as of 20 October 2016:

  • Australia East
  • Australia Southeast
  • Brazil South
  • East Asia
  • Southeast Asia
  • East US
  • West US
  • Japan East
  • Japan West
  • North Europe
  • West Europe

Enter a name for your StorSimple Manager service.

2. Create a Storage Account

I prefer to manually create a Storage Account instead of having one created automatically, to be able to give it a name that makes sense for the deployment and is easy to identify and recognize later on.


Make sure the Storage Account is in the same Azure region as the StorSimple Manager service.

3. Download the OVA image file

Under the new StorSimple Manager service/devices/create virtual device:


You’ll see a page like:


Click the link under item #1 that corresponds to your hypervisor to download the OVA file. Copy the Registration Key at the bottom. It will be used in a later step to register the OVA with the StorSimple Manager service.

Extract the .ZIP file


4. Provision a VM for the OVA:

I’m using Hyper-V on Server 2012 R2 in this example. Minimum VM specs: 4 cores, 8 GB of RAM, 500 GB disk space for drive c: (system disk).


Gen 2 is supported and recommended when using the VHDX image on Server 2012 R2.


According to Microsoft, dynamic memory is not supported 😦

Connect to the Hyper-V switch of your choice. Use the downloaded disk:


Click Next and Finish. Go back to the VM settings/Processor and select 4 cores:


Add a second disk to the VM under the SCSI controller. Set it as 500 GB dynamically expanding disk.
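The same VM can be provisioned with Hyper-V PowerShell instead of the GUI. This is a minimal sketch, assuming hypothetical VM/switch names and file paths for the extracted VHDX and the data disk:

# Gen 2 VM, 4 cores, 8 GB static RAM, OVA system disk plus a 500 GB dynamically expanding data disk
New-VM -Name 'StorSimple-OVA1' -Generation 2 -MemoryStartupBytes 8GB -VHDPath 'D:\OVA\StorSimple1200.vhdx' -SwitchName 'External1'
Set-VMProcessor -VMName 'StorSimple-OVA1' -Count 4
Set-VMMemory -VMName 'StorSimple-OVA1' -DynamicMemoryEnabled $false        # dynamic memory is not supported
New-VHD -Path 'D:\OVA\StorSimple-OVA1-Data.vhdx' -SizeBytes 500GB -Dynamic
Add-VMHardDiskDrive -VMName 'StorSimple-OVA1' -Path 'D:\OVA\StorSimple-OVA1-Data.vhdx'   # attaches to the SCSI controller on a Gen 2 VM
Set-VMNetworkAdapter -VMName 'StorSimple-OVA1' -StaticMacAddress '00155D012345'          # optional – see the static MAC note below
Start-VM -Name 'StorSimple-OVA1'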


Start the VM and log in to it. This takes several minutes. The default user is StorSimpleAdmin and the default password is Password1. Log in and change the password (8 character minimum). The OVA image runs a Core version of Server 2012 R2, and if a DHCP server is available it picks up an IP address:


An extremely limited set of commands is available:


However, Microsoft has made the support mode available without the need for a decryption tool:


This exposes the entire PowerShell capabilities for admins to manage the device.


This is really a good decision on Microsoft’s part. The current local web interface has many idiosyncrasies that can be frustrating for a device admin. Having the option to manage the device via PowerShell goes a long way towards faster device adoption and customer satisfaction in my opinion and experience.

Although not required, I recommend using a static MAC address for the OVA VM. To do so shut down the VM from Hyper-V Manager, then under settings\network\advanced, select static MAC:


Start the VM.

5. Configure the OVA via the local web interface

Browse to the OVA IP address, and bypass the local certificate warning in the browser. Log in with the new password you created in the prior step.


Under configuration/network settings, I recommend using a static IPv4 address



  • By default, the OVA will attempt to get an IP address if there’s a DHCP server in the environment.
  • In the local web interface, changing only the DNS server IP of an interface will error out. A workaround is to change the interface back to DHCP, apply, browse to the DHCP-assigned IP, log in, change it back to a static IP, and make all the needed changes in one step. In other words, you must change the IP address and DNS server address in one step or it fails to accept the changes.
  • There’s no way to remove IPv6 information in the local web interface

Browse to the new IP address to continue. For this post I’m using the device as an iSCSI SAN. I settled on leaving the device in ‘workgroup’:



I was unable to join an on-premises AD domain:


and entered credentials as:


But I got the error message “Domain does not exist”!?


I attempted to use the NetBIOS names (sam1 domain and sam1\administrator user) but got the same error.

I verified connectivity between the DC and the OVA, running these commands on the DC:
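The exact commands are in the screenshot below; as a sketch, equivalent checks (the OVA IP address here is hypothetical) would be:

Test-Connection -ComputerName 192.168.1.50 -Count 2              # basic ICMP reachability from the DC to the OVA
Test-NetConnection -ComputerName 192.168.1.50 -Port 443          # the OVA local web interface listens on HTTPS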


I also verified that the DC is responding to DNS queries. I ran the following command from a 3rd computer:
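Again, the actual command is in the screenshot; an equivalent check (the domain name and DC IP address below are hypothetical) would be:

Resolve-DnsName -Name sam1.local -Server 192.168.1.10 -Type A    # query the DC's DNS service directly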


I skipped Proxy settings, since I’m not using a proxy to get to the Internet in this environment.

Interestingly enough, Time Server settings accepted the local DC with no problem:


Finally, I registered the device with the StorSimple Manager Service by entering the Service Registration Key. This was my first device on this StorSimple Manager Service, so I recorded the Service Data Encryption Key received upon successful registration.


  • If this is not the first device to be registered with this StorSimple Manager service, you’ll need the Service Data Encryption Key as well to be able to register the device
  • You must have 3 green check marks at the Network, Device, and Time settings to be able to register the device under Cloud setting


6. Complete OVA configuration in Azure

In the classic portal (24 October 2016), click on your StorSimple Manager Service/Devices link and you should see the newly registered OVA


Click on that and click Complete Device Configuration


In the next screen, select a Storage Account. I recommend checking the box to Enable Cloud Storage Encryption, and entering a 32 character seed for at-rest encryption of data blocks that the device sends to the Azure Storage Account:


StorSimple Manager Services completes the following tasks:




Moving your StorSimple 8k device

You may have the situation where you need to move your StorSimple 8k iSCSI SAN from one physical location to another. Assuming the move is not to another continent or thousands of miles away, the following is the process I recommend:

  • On the file servers that receive iSCSI volumes from this StorSimple device, open Disk Management, and offline all volumes from this StorSimple device
  • (Optional) In the classic portal, under the device/maintenance page, install the latest Software and Firmware update. The reason this unrelated step is here is to take advantage of the downtime window to perform a device update. This may take 1-12 hours and may require access to the device serial interface.
  • Ensure that you have the Device Administrator password. You’ll need that to change the device IP configuration for the new site. If you don’t have it, you can reset it by going into the classic portal, under the device/configuration page.
  • Power down the device: in the classic portal, under the device/maintenance page, click Manage Controllers at the bottom, shut down Controller0, then repeat to shut down Controller1.
  • After the device is powered down, toggle the power buttons on the back of the PCMs to the off position. Do the same for the EBOD enclosure if this is an 8600 model device.
  • Move the device to the new location
  • Rack, cable, and power on the device by toggling the power buttons on the back of the PCM modules.
  • In the serial console,
    • Type 1 to login with full access, enter the device Administrator password.
    • Type in Invoke-HCSSetupWizard and enter the new information for the data0 interface: IP, mask, gateway, DNS server, NTP server, and Proxy information if that’s needed for Internet access in the new site (Proxy URL, authentication which is typically NTLM, and Proxy username and password if needed by your Proxy – the Proxy must be v1.1 compliant)
  • Back in the classic portal, you should see your device back online. Go to the device/configuration page and update any settings as needed, such as the controller0 and controller1 fixed IPs, and the iSCSI interface configuration if that has changed.
  • If the same file servers have moved with the StorSimple device,
    • Bring online the file servers, change IP configuration as needed
    • Verify iSCSI connectivity to the StorSimple device
    • Verify iSCSI initiator configuration
    • Online the iSCSI volumes
    • Test file access

StorSimple 8k series software version reference

This post lists StorSimple software versions, their release dates, and major new features for reference. Microsoft does not publish release dates for StorSimple updates. The release dates below are from published documentation and/or first hand experience. They may be off by up to 15 days.

  • Version 4.0 (17820) – released 12 February 2017 – see release notes, and this post.
    • Major new features: Invoke-HCSDiagnostics new cmdlet, and heatmap based restores
  • Version 3.0 (17759) – released 6 September 2016 – see release notes, and this post.
    • Major new features: the use of StorSimple as a backup target (as of 9/9/2016 it’s unclear what that means)
  • Version 2.2 (17708) – see release notes
  • Version 2.1 (17705) – see release notes
  • Version 2.0 (17673) – released January 2016 – see release notes, this post, and this post
    • Major new features: Locally pinned volumes, new virtual device 8020 (64TB SSD), ‘proactive support’, OVA (preview)
  • Version 1.2 (17584) – released November 2015 – see release notes, this post, and this post
    • Major new features: (Azure-side) Migration from legacy 5k/7k devices to 8k devices, support for Azure US GOV, support for cloud storage from other public clouds such as AWS/HP/OpenStack, update to the latest API (this should allow us to manage the device in the new portal, yet this has not happened as of 9/9/2016)
  • Version 1.1 (17521) – released October 2015 – see release notes
  • Version 1.0 (17491) – released 15 September 2015 – see release notes and this post
  • Version 0.3 (remains 17361) – released February 2015 – see release notes
  • Version 0.2 (17361) – released January 2015 – see release notes and this post
  • Version 0.1 (17312) – released October 2014 – see release notes
  • Version GA (General Availability – 0.0 – Kernel 6.3.9600.17215) – released July 2014 – see release notes – This is the first Windows OS based StorSimple software after Microsoft’s acquisition of StorSimple company.
  • At the time Microsoft acquired the StorSimple company, the StorSimple 5k/7k series ran a Linux OS based StorSimple software version – August 2012

StorSimple Software update 3.0 (17759)

This post describes one experience of updating a StorSimple 8100 series device from version 0.2 (17361) to the current (8 September 2016) version 3.0 (17759). It’s worth noting that:

  • StorSimple 8k series devices that shipped in mid 2015 came with software version 0.2
  • Typically, the device checks periodically for updates, and when updates are found a note similar to this image is shown in the device/maintenance page:
  • The device admin then picks the time to deploy the updates by clicking the INSTALL UPDATES link. This kicks off an update job, which may take several hours.
  • This update method is known as updating StorSimple device using the classic Azure portal, as opposed to updating the StorSimple device using the serial interface by deploying the update as a hotfix.
  • Released updates may not show up, in spite of scanning for updates manually several times:
    The image above was taken on 9 September 2016 (update 3.0 is the latest at this time). It shows that no updates are available even after scanning for updates several times. The reason is that Microsoft deploys updates in a ‘phased rollout’, so they’re not available in all regions at all times.
  • Updates are cumulative. This means that for a device running version 0.2, for example, we upgrade directly to 3.0 without the need to manually update to any intermediate version first.
  • An update may include one or both of the following 2 types:
    • Software updates: This is an update of the core 2012 R2 server OS that’s running on the device. Microsoft identifies this type as a non-intrusive update. It can be deployed while the device is in production, and should not affect mounted iSCSI volumes. Under the covers, the device controller0 and controller1 are 2 nodes in a traditional Microsoft failover cluster. The device uses traditional Cluster Aware Updating to update the 2 controllers: it updates and reboots the passive controller first, fails over the device (iSCSI target and other clustered roles) from one controller to the other, then updates and reboots the second controller. Again, this should be a no-downtime process.
    • Maintenance mode updates:

      These are updates to shared components in the device that require downtime. Typically we see LSI SAS controller updates and disk firmware updates in this category. Maintenance mode updates must be done from the serial interface console (not the Azure web interface or PowerShell interface). The typical downtime for a maintenance mode update is about 30 minutes, although I would schedule a 2 hour window to be safe. The maintenance mode update steps are:

      • On the file servers, offline all iSCSI volumes provisioned from this device.
      • Log in to the device serial interface with full access
      • Put the device in Maintenance mode: Enter-HcsMaintenanceMode, wait for the device to reboot
      • Identify available updates: Get-HcsUpdateAvailability, this should show available Maintenance mode updates (TRUE)
      • Start the update: Start-HcsUpdate
      • Monitor the update: Get-HcsUpdateStatus
      • When finished, exit maintenance mode: Exit-HcsMaintenanceMode, and wait for the device to reboot.
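      Put together, the serial console session for a Maintenance mode update looks something like the following sketch (cmdlet names as listed above; output omitted):

      Enter-HcsMaintenanceMode          # device reboots into Maintenance mode
      Get-HcsUpdateAvailability         # confirm Maintenance mode updates show as TRUE
      Start-HcsUpdate                   # start the Maintenance mode update
      Get-HcsUpdateStatus               # repeat until the update reports as complete
      Exit-HcsMaintenanceMode           # device reboots back into normal mode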



Powershell script to list StorSimple network interface information including MAC addresses

In many cases we can obtain the IP address of a network interface via one command but get the MAC address from another command. The StorSimple 8k series, which runs a Core version of Server 2012 R2 (as of 20 June 2016), is no exception. In this case we can get the IP address information of the device network interfaces via the Get-HCSNetInterface cmdlet. However, to identify MAC addresses we need to use the Get-NetAdapter cmdlet. This PowerShell script merges the information from both cmdlets, presenting a collection of PS Objects, each of which has the following properties:

  • InterfaceName
  • IPv4Address
  • IPv4Netmask
  • IPv4Gateway
  • MACAddress
  • IsEnabled
  • IsCloudEnabled
  • IsiSCSIEnabled
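The full script is linked above; the following is only a minimal sketch of the merge logic. Note that the Get-HCSNetInterface property names used below (InterfaceName, IPv4Address, and so on) are assumptions for illustration and may not match the cmdlet’s actual output:

# Sketch only – the Get-HCSNetInterface property names are assumptions
$Adapters = Get-NetAdapter
Get-HCSNetInterface | ForEach-Object {
    $HCSInterface = $_
    $Adapter = $Adapters | Where-Object { $_.Name -eq $HCSInterface.InterfaceName }
    [PSCustomObject]@{
        InterfaceName  = $HCSInterface.InterfaceName
        IPv4Address    = $HCSInterface.IPv4Address
        IPv4Netmask    = $HCSInterface.IPv4Netmask
        IPv4Gateway    = $HCSInterface.IPv4Gateway
        MACAddress     = $Adapter.MacAddress
        IsEnabled      = $HCSInterface.IsEnabled
        IsCloudEnabled = $HCSInterface.IsCloudEnabled
        IsiSCSIEnabled = $HCSInterface.IsiSCSIEnabled
    }
}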

Script output may look like:


For more information about connecting to StorSimple via PowerShell see this post.



Presenting StorSimple iSCSI volumes to a failover cluster

In a typical implementation, StorSimple iSCSI volumes (LUNs) are presented to a file server, which in turn presents SMB shares to clients. Although the StorSimple iSCSI SAN features redundant hardware and is implemented using redundant networking paths on both the Internet facing side and the iSCSI side, the file server in this example constitutes a single point of failure. One solution here is to present the iSCSI volumes to all nodes in a failover cluster. This post goes over the steps to present StorSimple iSCSI volumes to a failover cluster as opposed to a single file server.

Create volume container, volume, unmask to all cluster nodes

As usual, keep one volume per volume container to be able to restore one volume at a time. Give the volume a name, a size, and the type ‘tiered’. Finally, unmask it to all nodes in the failover cluster:


Format the volume:

In Failover Cluster Manager identify the owner node of the ‘File Server for general use‘ role:


In Disk Management of the owner node identified above, you should see the new LUN:


Right click on Disk20 in the example above and click Online. Right click again and click Initialize Disk. Choose the GPT partition style. GPT is recommended over MBR for several reasons, such as the MBR maximum volume size limitation.

Right click on the available space to the right of Disk20 and create Simple Volume. It’s recommended to use Basic Disks and Simple Volumes with StorSimple volumes.

Format with NTFS, 64 KB allocation unit size, use the same volume label as the volume name used in StorSimple Azure Management Interface, and Quick format. Microsoft recommends NTFS as the file system to use with StorSimple volumes. 64KB allocation units provide better optimization as the device internal deduplication and compression algorithms use 64KB blocks for tiered volumes. Using the same volume label is important since currently (1 June 2016) StorSimple does not provide a LUN ID that can be used to correlate a LUN created on StorSimple to one appearing on a host. Quick formatting is important since these are thin provisioned volumes.
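The same steps can be performed with PowerShell instead of the Disk Management GUI. This is a minimal sketch, assuming disk number 20 and the TestSales-Vol label used later in this post (Format-Volume performs a quick format unless -Full is specified):

# Online, initialize (GPT), partition, and format the new StorSimple LUN with 64 KB allocation units
Set-Disk -Number 20 -IsOffline $false
Initialize-Disk -Number 20 -PartitionStyle GPT
New-Partition -DiskNumber 20 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB -NewFileSystemLabel 'TestSales-Vol'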

For existing volumes, the Windows GUI does not provide a way of identifying the volume allocation unit size. However, we can look it up via PowerShell as in:

@('c:','d:','y:') | % {
    $Query = "SELECT BlockSize FROM Win32_Volume WHERE DriveLetter='$_'"   # WMI query for the volume's allocation unit (block) size
    $BlockSize = (Get-WmiObject -Query $Query).BlockSize/1KB
    Write-Host "Allocation unit size on Drive $_ is $BlockSize KB" -Fore Green
}

Replace the drive letters in line 1 with the ones you wish to look up.

Summary of volume creation steps/best practices:

  • GPT partition
  • Basic Disk (not dynamic)
  • Simple volume (not striped, mirrored, …)
  • NTFS file system
  • 64 KB allocation unit (not the default 4 KB)
  • Same volume label as the one in StorSimple
  • Quick Format

Add the disk to the cluster:

Back in Failover Cluster Manager, under Storage, right click on Disks, and click Add Disk


Pick Disk20 in this example


Right click on the new cluster disk, and select Properties


Change the default name ‘Cluster Disk 3’ to the same volume label and name used in StorSimple


Assign Cluster Disk to File Server Role

In Failover Cluster Manager, under Storage/Disks, right click on TestSales-Vol in this example, and select Assign to Another Role under More Actions


Select the File Server for General Use role – we happen to have one role in this cluster:


Create clustered file shares

As an example, I created 2 folders in the TestSales-Vol volume:


In Failover Cluster Manager, under Roles, right click on the File Server for General Use role, and select Add File Share


Select SMB Quick in the New Share Wizard


Click Type a custom path and type in or Browse to the folder on the new volume to be shared


Change the share name or accept the default (folder name). In this example, I added a dollar sign $ to make this a hidden share


It’s important to NOT allow caching of the share for StorSimple volumes. Enabling access-based enumeration is my personal recommendation.


Finally adjust NTFS permissions as needed or accept the defaults:


Click Create to continue


Repeat the above steps for the TestSales2 folder/share in this example