Presenting StorSimple iSCSI volumes to a failover cluster

In a typical implementation, StorSimple iSCSI volumes (LUNs) are presented to a file server, which in turn presents SMB shares to clients. Although the StorSimple iSCSI SAN features redundant hardware and redundant networking paths on both the Internet-facing side and the iSCSI side, the file server in this example constitutes a single point of failure. One solution is to present the iSCSI volume to all nodes in a failover cluster. This post will go over the steps to present a StorSimple iSCSI volume to a failover cluster as opposed to a single file server.

Create volume container, volume, unmask to all cluster nodes

As usual, keep one volume per volume container to be able to restore one volume at a time. Give the volume a name and size, set the type to tiered, and finally unmask it to all nodes in the failover cluster:


Format the volume:

In Failover Cluster Manager, identify the owner node of the 'File Server for general use' role:
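The owner node can also be identified from PowerShell. A quick sketch, assuming the role is named 'FileServer1' (a placeholder; substitute your own role name):

```powershell
# Requires the FailoverClusters module (available on cluster nodes or via RSAT)
Import-Module FailoverClusters

# 'FileServer1' is a placeholder for your File Server role name
Get-ClusterGroup -Name "FileServer1" | Select-Object Name, OwnerNode, State
```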


In Disk Management of the owner node identified above, you should see the new LUN:


Right click on Disk20 in the example above and click Online. Right click again and click Initialize Disk. Choose the GPT partition style; GPT is recommended over MBR for several reasons, including MBR's 2 TB maximum volume size.

Right click on the available space to the right of Disk20 and create a Simple Volume. It's recommended to use basic disks and simple volumes with StorSimple volumes.

Format with NTFS, a 64 KB allocation unit size, the same volume label as the volume name used in the StorSimple Azure management interface, and Quick format. Microsoft recommends NTFS as the file system to use with StorSimple volumes. 64 KB allocation units provide better optimization because the device's internal deduplication and compression algorithms use 64 KB blocks for tiered volumes. Using the same volume label is important since currently (1 June 2016) StorSimple does not provide a LUN ID that can be used to correlate a LUN created on StorSimple to one appearing on a host. Quick formatting is important since these are thin-provisioned volumes.
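The online/initialize/format steps above can also be scripted. A sketch, assuming the new LUN surfaced as disk number 20 as in this example:

```powershell
# Bring the disk online, initialize it as GPT, create a single partition,
# and quick-format it NTFS with a 64 KB allocation unit.
# Disk number 20 and the label 'TestSales-Vol' are from this example.
Set-Disk -Number 20 -IsOffline $false
Initialize-Disk -Number 20 -PartitionStyle GPT
New-Partition -DiskNumber 20 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB `
        -NewFileSystemLabel "TestSales-Vol"   # quick format is the default
```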

For existing volumes, the Windows GUI does not provide a way of identifying the volume allocation unit size. However, we can look it up via PowerShell as in:

# Query WMI for the block (allocation unit) size of each volume
@('c:','d:','y:') | % {
    $Query = "SELECT BlockSize FROM Win32_Volume WHERE DriveLetter='$_'"
    $BlockSize = (Get-WmiObject -Query $Query).BlockSize / 1KB
    Write-Host "Allocation unit size on Drive $_ is $BlockSize KB" -Fore Green
}

Replace the drive letters in the first line with the ones you wish to look up.

Summary of volume creation steps/best practices:

  • GPT partition
  • Basic Disk (not dynamic)
  • Simple volume (not striped, mirrored, …)
  • NTFS file system
  • 64 KB allocation unit (not the default 4 KB)
  • Same volume label as the one in StorSimple
  • Quick Format

Add the disk to the cluster:

Back in Failover Cluster Manager, under Storage, right click on Disks, and click Add Disk


Pick Disk20 in this example
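Alternatively, available disks can be added to the cluster from PowerShell:

```powershell
# Requires the FailoverClusters module.
# Adds every disk that is visible to the cluster but not yet clustered.
Import-Module FailoverClusters
Get-ClusterAvailableDisk | Add-ClusterDisk
```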


Right click on the new cluster disk, and select Properties


Change the default name ‘Cluster Disk 3’ to the same volume label and name used in StorSimple
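The rename can also be done from PowerShell; 'Cluster Disk 3' and 'TestSales-Vol' are the names from this example:

```powershell
# Rename the cluster disk resource to match the StorSimple volume label
Import-Module FailoverClusters
(Get-ClusterResource -Name "Cluster Disk 3").Name = "TestSales-Vol"
```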


Assign Cluster Disk to File Server Role

In Failover Cluster Manager, under Storage/Disks, right click on TestSales-Vol in this example, and select Assign to Another Role under More Actions


Select the File Server for General Use role – we happen to have one role in this cluster:
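The same assignment can be sketched in PowerShell; 'FileServer1' is a placeholder for the role's group name in your cluster:

```powershell
# Move the disk resource into the File Server role's resource group
Import-Module FailoverClusters
Move-ClusterResource -Name "TestSales-Vol" -Group "FileServer1"
```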


Create clustered file shares

As an example, I created 2 folders in the TestSales-Vol volume:


In Failover Cluster Manager, under Roles, right click on the File Server for General Use role, and select Add File Share


Select SMB Quick in the New Share Wizard


Click Type a custom path and type in or Browse to the folder on the new volume to be shared


Change the share name or accept the default (folder name). In this example, I added a dollar sign $ to make this a hidden share


It’s important to NOT allow caching of the share for StorSimple volumes. Enabling access-based enumeration is my personal recommendation


Finally adjust NTFS permissions as needed or accept the defaults:


Click Create to continue


Repeat the above steps for the TestSales2 folder/share in this example
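The wizard steps above can also be sketched in PowerShell; the drive letter 'X:' and the scope name 'FileServer1' are placeholders for the values in your environment:

```powershell
# Create the hidden clustered share with caching disabled and
# access-based enumeration enabled, scoped to the File Server role
New-SmbShare -Name "TestSales$" -Path "X:\TestSales" `
    -ScopeName "FileServer1" `
    -CachingMode None -FolderEnumerationMode AccessBased
```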



4 responses

  1. David Thorne

    Good post, is it possible to do this with 2 storSimple devices. i.e have one file server storsimple in one location and another file server storsimple in different location and setup the cluster like that with the data replicating?

    September 1, 2016 at 4:28 am

    • Andrew

      I’d also like to know if two Storsimple appliances could be used which are both available (with some kind of failover between them?) mainly so our poor DC site link is not oversaturated if one file server goes offline. All I read about is the options are manual failover, either to cloud based virtual appliances or other spare capacity on site appliances, both based off the last snapshot taken. Hmm…

      Good post though for setting it up across multiple servers.

      June 20, 2017 at 9:25 am

  2. Marc Gijsman

    Hi SAM,

    Does this solution provide instant failover? Or does failover take time multiple seconds.
    If so is there a solution for high available file servers with Windows 2012R2 and StorSimple?


    November 15, 2016 at 8:49 am

    • Marc,
      Yes, failover is ‘instant’ and automatic. ‘Instant’ here could be anywhere from milliseconds to several seconds. File reads and writes in progress will be momentarily paused then resumed, but will not be interrupted. End users will not notice a difference. Failover Clustering is a solution for high availability. Traditional file server clustered role is what’s recommended here – to be used with StorSimple volumes.

      November 19, 2016 at 4:41 pm
