Creating and using an Azure StorSimple 1100 Virtual device (SVA)


StorSimple 8000 series comes with exciting new features. One of them is the ability to have a virtual device. Microsoft offers the StorSimple 1100 virtual appliance as an Azure VM that can connect to a StorSimple 8000 series array’s Azure Storage account and make its data available in case of a physical array failure. There’s no cost for the StorSimple 1100 virtual device itself, but the VM compute and other IaaS costs are billed separately. So, the recommendation is to keep the StorSimple 1100 Azure VM powered up only when needed.

To set up a StorSimple 1100 virtual appliance:

1. In the Azure Management portal, click StorSimple on the left, then click on your StorSimple Manager service, click the Devices link on top, and click Create Virtual Device at the bottom:

8k27

Enter a name for the new StorSimple 1100 virtual device. Pick a Virtual Network and Subnet. Pick a Storage account for the new virtual device, or create a new one. Check the “I understand that Microsoft can access the data stored on my virtual device” box:

8k28

Notes:

  • A Virtual Network and Subnet must be set up before you begin setting up a StorSimple 1100 virtual appliance.
  • It’s recommended to use a separate Storage account for your StorSimple 1100 virtual appliance. If you use the same Storage account as your StorSimple 8000 series array, the data stored there will be subtracted from your array’s Storage Entitlement, which is more costly than a separate Storage account.
  • The “I understand that Microsoft can access the data stored on my virtual device” checkbox is a reminder that data is encrypted at rest only, that is, while it resides in the Storage account associated with your StorSimple 8000 series array. Once this data is accessed and used by a StorSimple 1100 VM, it’s no longer encrypted: stored data is encrypted, compute data is not. In other words, if data in your Storage account is accessed by unauthorized individuals, it’s still safe due to AES-256 at-rest encryption. However, if data in your StorSimple 1100 VM is accessed by unauthorized individuals, it’s no longer safe, since compute data cannot be encrypted.
  • Provisioning the virtual device can take up to 15 minutes.

2. Back in the Devices page, we now have the newly provisioned StorSimple 1100 device:

8k29

Note: The StorSimple 1100 virtual device has a maximum capacity limit of 30TB.

Click on it, then click Complete Device Setup, or click the Configure link on top:

8k30

Enter the Service Data Encryption Key for the StorSimple 8000 series array, obtained during its initial setup:

8k31

Also enter the device Administrator and Snapshot Manager passwords.

3. Similar to a StorSimple 8000 series array, set up Volume Containers, volumes, and Access Control Records (ACRs).
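The objects set up in this step relate to each other in a simple way: an Access Control Record (ACR) maps an iSCSI initiator's IQN to the volumes it may mount, and volumes are grouped into Volume Containers. A minimal conceptual sketch of that relationship (hypothetical Python classes for illustration, not an actual Azure SDK):

```python
from dataclasses import dataclass, field

# Hypothetical names -- a conceptual model of how StorSimple objects relate.
# An ACR ties an iSCSI initiator's IQN to the volumes it may mount.

@dataclass
class AccessControlRecord:
    name: str
    initiator_iqn: str   # e.g. "iqn.1991-05.com.microsoft:fileserver01"

@dataclass
class Volume:
    name: str
    size_tb: float
    acrs: list = field(default_factory=list)   # which hosts can see this volume

@dataclass
class VolumeContainer:
    name: str
    volumes: list = field(default_factory=list)

acr = AccessControlRecord("fs01", "iqn.1991-05.com.microsoft:fileserver01")
vol = Volume("SalesData", size_tb=2.0, acrs=[acr])
container = VolumeContainer("Container01", volumes=[vol])
```

Because the ACRs travel with the volumes, this model is also why, after a failover, the original on-prem servers can see the recovered volumes without reconfiguration.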


To use a StorSimple 1100 virtual appliance in case of a StorSimple 8000 device failure:

1. In the Azure Management portal, click StorSimple on the left, then click on your StorSimple Manager service, click on your StorSimple 8000 device, and click Failover at the bottom:

8k32

2. Select one or more Volume Containers:

8k33

Note: It’s recommended to keep the total volume capacity in any given Volume Container under 30TB, since a StorSimple 1100 virtual device has a 30TB maximum capacity.
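The 30TB rule above is simple arithmetic over a container's provisioned volume sizes. A quick sanity-check helper (hypothetical Python, illustrative only):

```python
# Hypothetical helper -- checks that a Volume Container's total provisioned
# capacity stays under the StorSimple 1100 virtual device's 30 TB limit,
# so the container remains eligible for failover to a single SVA.

SVA_MAX_TB = 30

def container_fits_on_sva(volume_sizes_tb):
    """Return (fits, total_tb) for a list of provisioned volume sizes in TB."""
    total = sum(volume_sizes_tb)
    return total <= SVA_MAX_TB, total

fits, total = container_fits_on_sva([10, 8, 5])   # 23 TB total -> fits
```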

3. Select the target StorSimple 1100 virtual device to fail over to:

8k34

Review the summary screen, and check the box at the bottom:

8k35

This creates a failover job. Jobs can be viewed in the StorSimple Manager service page under the Jobs link on top.

8k36

Once this failover job is complete, we can see that the Volume Container that used to be on the StorSimple 8000 series array is now on the SVA:

8k37

So far, we’ve made the StorSimple 8000 series array’s volumes available on the SVA we just spun up. The volumes in this Volume Container on the SVA will have the same ACRs as the original volumes on the StorSimple 8000 series array, which makes the volumes available to the original on-prem servers.

This is how the topology looks before the failover:

8k39

and here is how it looks after the failover:

8k40

Considerations and limitations:

  • In this failover scenario, we have to mount the new volumes presented by the SVA onto the file server and recreate the SMB file shares.
  • The SVA has a maximum capacity of 30TB. So, volumes larger than 30TB cannot be recovered without a replacement physical StorSimple 8000 series array.
  • More than one SVA may be needed to provide enough storage capacity for all volumes that need to be recovered.
  • The SVA has a single vNIC; MPIO is not supported.
  • The SVA does not have controller redundancy, so maintenance that requires an SVA reboot or downtime will cause volumes to be unavailable.
  • Data on the SVA is accessible to Microsoft. If the SVA is compromised, data is unprotected.
  • Volume access will be slower than usual due to WAN link latency.
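The 30TB-per-SVA limit and the point that more than one SVA may be needed can be combined into a rough capacity-planning estimate. Since failover is per Volume Container, each container must fit wholly on one SVA; a first-fit-decreasing pass gives an estimate of the SVA count (hypothetical Python sketch, not an official sizing tool):

```python
# Hypothetical sketch: estimate how many SVAs a recovery would need, given
# that each SVA tops out at 30 TB and failover is per Volume Container.
# Containers over 30 TB cannot be recovered to an SVA at all.

SVA_MAX_TB = 30

def svas_needed(container_sizes_tb):
    """Estimate SVA count for the given Volume Container sizes (TB)."""
    oversized = [c for c in container_sizes_tb if c > SVA_MAX_TB]
    if oversized:
        raise ValueError(f"containers too large for any SVA: {oversized}")
    svas = []                                   # free capacity per SVA
    for size in sorted(container_sizes_tb, reverse=True):
        for i, free in enumerate(svas):
            if size <= free:                    # fits on an existing SVA
                svas[i] -= size
                break
        else:
            svas.append(SVA_MAX_TB - size)      # provision a new SVA
    return len(svas)

print(svas_needed([25, 20, 10, 5]))   # -> 2
```

First-fit-decreasing is a heuristic, so treat the result as a planning estimate rather than a guarantee.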

Microsoft suggests taking it a step further and creating new Azure VMs as file servers:

8k41

This recovery scenario suffers from all of the above limitations, in addition to:

  • Need to remap drive letters on all client computers for all recovered volumes
  • Need to reconfigure all applications that use UNC paths pointing to the original servers so they point to the new VMs
  • Need for the Azure virtual file servers to join the domain
  • Additional time, cost, and complexity of creating new Azure virtual file servers

Some of these steps may not be needed if a namespace is used to access network drives.


Responses

  1. Azerty

    Is the virtual appliance the way to run a full antivirus scan, since we can’t do it locally with the physical appliance?

    March 22, 2015 at 12:49 pm

    • Not really. Although technically possible, it would be a pretty costly scan.
      I recommend you configure your anti-virus software for incremental scans only.
      However, if your primary data set resides entirely on-prem, a full scan would not be a problem.
      For example, if you have an 8100 array (16.2 TB local storage before deduplication/compression) and your primary data set is 10 TB, a full scan would be OK.
      A full scan will be a problem if part of your primary data set has tiered to Azure. This happens when your primary data set size exceeds the on-prem capacity of the array.

      March 23, 2015 at 7:39 am
