Archive for February, 2016

StorSimple 8k update to version 2.0 (17673)

StorSimple update 2.0 brings a number of exciting new features, such as Locally Pinned Volumes, OVA (On-premise Virtual Array), and the enhanced SVA (StorSimple Virtual Array) model 8020 with 64 TB capacity, as opposed to the 30 TB capacity of the prior model 1100 (now renamed 8010).

Update 2.0 is another intrusive update that requires downtime. It includes an LSI firmware update (KB 3121900) and an SSD disk firmware update (KB 3121899).

Prior to the update, we can see the device running Software version 1.2 (17584):


This can also be seen from the serial or PowerShell interfaces by using the Get-HcsSystem cmdlet:
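For instance, the version fields can be pulled out directly on the device's PowerShell interface. A sketch (exact property names may vary slightly by software version):

```powershell
# On the StorSimple device PowerShell interface:
Get-HcsSystem | Select-Object Name, HcsVersion, SerialNumber
```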


Ensure that both controllers have routable IPs

As suggested by the update instructions, we ensure that both controllers 0 and 1 have routable IPs prior to starting. To do so, I ping an external Internet IP address from each of the controllers’ fixed IPs:

From Controller 0 (the prompt must say ‘Controller0>’):

Test-HcsConnection -Source -Destination

A positive response looks like:


From Controller 1 (the prompt must say ‘Controller1>’):

Test-HcsConnection -Source -Destination
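The -Source and -Destination values depend on your environment. A hypothetical example, substituting a controller’s fixed IP as the source and a reachable Internet address as the destination:

```powershell
# Hypothetical addresses - replace with your controller's fixed IP
# and any reachable external IP:
Test-HcsConnection -Source 10.10.10.5 -Destination 8.8.8.8
```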


Phase I – Software update – start the update from the Azure Management Interface

In the classic portal, under the device Maintenance page, click Install Updates at the bottom:


Check the box and click the check mark:


Pre-upgrade checks are started:


And a Software Update Job is created:




Unlike prior updates, the 2.0 update starts on the passive controller:


Under the StorSimple Manager/Jobs page, we can see an update job in progress:


The controller being updated will reboot several times. During the update we’ll see unusual controller health and state information in the portal:


This is normal while the update is in progress.

A few hours later, we can see that the passive controller has been patched to version 2.0


and that a controller failover has occurred, where controller 1 is now active, and controller 0 (now passive) is being patched:


About 4.5 hours later, the first phase of the update is finished:


We can see the device in normal state and health under the Maintenance page:


Phase II – Maintenance Mode LSI firmware update

Unfortunately, this is an intrusive update that requires downtime, similar to phase 2 of the StorSimple version 1.2 update posted here.

To summarize the steps of maintenance mode updates:

  • Schedule a down-time window
  • Offline all StorSimple iSCSI volumes on the file servers
  • Run a manual cloud snapshot of all volumes
  • On the device serial (not PowerShell) interface, put the device in Maintenance mode:
    Both controllers will reboot
  • Patch controller 0:
    Check update progress:
  • After controller 0 is patched, repeat the last step on controller 1 to patch it
  • Finally exit Maintenance mode:
    Both controllers will reboot
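The maintenance-mode portion of the steps above can be sketched with the device’s Hcs cmdlets. This is only a sketch, to be run on the serial interface; the hotfix share path and credentials are hypothetical:

```powershell
# On the device serial interface (not remote PowerShell):
Enter-HcsMaintenanceMode          # both controllers reboot into maintenance mode

# On each controller, apply the firmware hotfix and check its progress
# (share path and credentials below are hypothetical):
Start-HcsHotfix -Path \\fileserver\share\DiskFirmware -Credential (Get-Credential)
Get-HcsUpdateStatus

# When both controllers are patched:
Exit-HcsMaintenanceMode           # both controllers reboot back to normal mode
```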

The device is now back in normal operating condition, and we can online the volumes back on the file servers.

Setting up Azure AD Connect, 2-way directory synchronization, password write-back, online-password reset

For this demo, I will create a new Azure Active Directory (AAD) called Vertitech3AAD and a new on-premise Active Directory called Vertitech3OP.local (NetBIOS name Vertitech3OP) in a new 2012 R2 AD forest.

Create a new Azure Active Directory:

As of 24 February 2016, creating a directory is available only in the classic portal. If you try to do it in the new portal:


You’ll simply be redirected to the classic portal:

I created the Vertitech3AAD Azure Active Directory in Azure, and created an on-premise AD domain called Vertitech3OP.local in a new 2012 R2 forest:


I can see the new AAD (Azure Active Directory) domain:


Create new AAD Global Admin user:

We create a new AAD user for AD Connect because we need a Global Admin that has rights to a single AAD. In the new AAD I create a new user with Global Admin permissions:


The new AAD user is created:


Change the temporary user password:

Next, I must change the new user’s password. I browse to the portal login page, log off, and log in again using the new user credentials and temporary password:


I’m then prompted to change my password:


Download and install AD Connect on an on-premise machine:

AD Connect can be downloaded from the Azure AD page or this link.


Install AD Connect



Using Express Settings:


Enter the AAD Global Admin user name and password:


And local (on-premise) AD credentials – this account needs to be a member of the Enterprise Admins group:


The message/recommendation about custom domain verification can be safely ignored.


AD Connect uses SQL Express by default, but it can be configured to use an existing full on-premise SQL deployment instead:


And we’re all done:


I recorded the machine’s services in an XML file before installing AD Connect, using this PowerShell command:

Get-Service | Export-Clixml .\Services1.xml

After installing AD Connect, I ran this small script to identify new services added:

$Services1 = Import-Clixml .\Services1.xml
$NewServices = @()
# Services2.xml is a second snapshot (Get-Service | Export-Clixml .\Services2.xml) taken after the install
(Import-Clixml .\Services2.xml) | % {
    if ($_.Name -notin $Services1.Name) {
        $NewServices += $_
    }
}
$NewServices | sort Name | select Name, DisplayName, Status | FT -a

We can see 5 new services. Some are running under LocalSystem.


In Computer Management under Local Users and Groups, we can see a number of new local groups that have been created during AD Connect installation:


Only the ADSyncAdmins local group has members: the local user account (the service account for the ADSync service) and the domain account that the AD Connect installation ran under.

And in Azure we can see a new Synchronization service account:


Also, note that Directory integration is now Activated:


To view synchronization activity, run the Synchronization Service Manager (C:\Program Files\Microsoft Azure AD Sync\UIShell\miisclient.exe):


User objects in the on-premise AD need to have inheritance enabled for AD Connect to work and synchronize these objects to Azure AD.
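A quick way to spot objects that would fail this check is to look for disabled ACL inheritance. A sketch using the ActiveDirectory module (the unscoped filter is for illustration and may be slow in large domains):

```powershell
Import-Module ActiveDirectory
# List users whose AD object has inheritance disabled (AreAccessRulesProtected = True):
Get-ADUser -Filter * | Where-Object {
    (Get-Acl "AD:\$($_.DistinguishedName)").AreAccessRulesProtected
} | Select-Object Name, DistinguishedName
```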

Enable Password Write-back:

We can also see the Azure AD Connect icon on the desktop (a shortcut to “C:\Program Files\Microsoft Azure Active Directory Connect\AzureADConnect.exe”)


Which shows the following options:


The first option is View Current Configuration:


Note the default settings above. To change synchronization settings, click Customize Synchronization Options:


Next we enter our Azure AD Global Admin user credentials:


And our local (on-premise) AD admin credentials:


We can select to synchronize all domains and OUs or specific domains and OUs:


As well as optional features:


I check the box to enable Password Write-back, and click Install to reconfigure the synchronization process:



To test synchronization, I create a local AD user:


By default, AD Connect synchronizes every 30 minutes. To force a manual synchronization, I use this PowerShell cmdlet on the AD Connect machine:

Start-ADSyncSyncCycle -PolicyType Delta
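The same cmdlet can also force a full synchronization rather than a delta, which is useful after changing OU filtering:

```powershell
# Full (initial) sync - re-evaluates all objects rather than just recent changes:
Start-ADSyncSyncCycle -PolicyType Initial
```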

Now I can see the user in Azure:


Configure Password Reset Policy:

In the Azure classic portal, browse to your directory’s Configure page:


Click the Yes button for ‘Users Enabled for Password Reset’.

Don’t forget to click the Save icon on the bottom center to save and apply your new settings.

Accept the remaining default settings or customize them as needed under the ‘user password reset policy’ section.

I changed the default setting ‘Require Users to Register When Signing in’ from Yes to No. This feature requires users to enter a Mobile Phone OR Alternate Email Address, as configured in this section. You may want to warn users beforehand to expect that requirement, and/or tackle any internal organization/privacy issues related to users’ alternate emails and mobile phone numbers.

One last note here: Password Reset Policy is a directory-wide setting. It will apply to all users. As of 7 March 2016, it cannot be configured to apply to a certain user/group/OU.

Finally, users can change their passwords online using the standard Azure password reset pages:


which can be reached from the settings/password link, for example:



Setting up certificate based communication between SQL server endpoints for SQL mirroring

SQL mirroring is a popular DR option for SQL databases. It provides a warm standby server where a database can be recovered quickly. Although SQL mirroring is deprecated by Microsoft in SQL 2016 in favor of Availability Groups, the two share several elements of the underlying technology.

SQL mirroring’s popularity is often attributed to the following:

  • It does not require shared storage like clustering
  • It does not require a common file share like log-shipping
  • It can be configured for automatic failover (synchronous mode only) with the configuration of a SQL Witness (a 3rd server). This option requires that the application/client be mirror-aware (include ‘Failover Partner=xxxx’ in the connection string)
  • It can be configured in safety/synchronous mode (default), where a transaction is written to both servers before a commit is returned to the client. This requires low latency between the 2 servers.
  • It can be configured in performance/asynchronous mode, where the primary server sends a commit back to the client as soon as the transaction is written to the send queue. This may lose data if the primary fails before the transaction makes it to the secondary server’s redo queue.
  • It can be configured across distant geographical locations (recommend asynchronous mode and certificate based authentication in this scenario)
  • It can be configured between 2 servers that belong to different AD domains using certificate authentication.

This PowerShell script automates the last scenario: setting up certificate-based communication for a pair of SQL 2014 servers in Azure. The script can be slightly modified to work on on-premises SQL servers as well. It has been tested on SQL 2014 SP1, but should work on SQL 2008 R2 and above.
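The core of what such a script drives on each server is standard T-SQL for certificate-authenticated mirroring endpoints. A sketch (certificate, server, and path names are hypothetical; the real script also exchanges certificate files and creates logins/users for the partner server):

```powershell
# Hypothetical names/paths; run equivalent T-SQL against each of the 2 servers.
$tsql = @"
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
CREATE CERTIFICATE Vertitech1SQL1_cert WITH SUBJECT = 'SQL1 mirroring certificate';
CREATE ENDPOINT Endpoint_mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE Vertitech1SQL1_cert, ROLE = ALL);
BACKUP CERTIFICATE Vertitech1SQL1_cert TO FILE = 'C:\Certs\Vertitech1SQL1_cert.cer';
"@
Invoke-Sqlcmd -ServerInstance 'Vertitech1SQL1' -Query $tsql
```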

Region ‘Initialize’ output may look like:


Region ‘configure outbound connections’ output may look like:


Region ‘configure inbound connections’ output may look like:


Finally, region ‘Final Cleanup’ deletes the certificate files and closes open sessions to the 2 SQL servers:


To undo this setup, you can run this TSQL script on the Principal server:

DROP USER Vertitech1SQL2_user
DROP LOGIN Vertitech1SQL2_login
DROP ENDPOINT Endpoint_mirroring

SELECT * FROM sys.sysusers WHERE name = 'Vertitech1SQL2_user'
SELECT * FROM sys.server_principals WHERE name = 'Vertitech1SQL2_login'
SELECT * FROM sys.certificates
SELECT name, port FROM sys.tcp_endpoints

and this TSQL script on the Secondary server:

DROP USER Vertitech1SQL1_user
DROP LOGIN Vertitech1SQL1_login
DROP ENDPOINT Endpoint_mirroring

SELECT * FROM sys.sysusers WHERE name = 'Vertitech1SQL1_user'
SELECT * FROM sys.server_principals WHERE name = 'Vertitech1SQL1_login'
SELECT * FROM sys.certificates
SELECT name, port FROM sys.tcp_endpoints

Managing Azure VMs using Powershell from your local desktop

Issuing PowerShell commands from a local (on-premises) workstation and having them execute on remote Azure virtual machines requires certificate-based authentication in most cases, since the local machine and the Azure VM often don’t belong to the same Active Directory domain. In the Azure Gallery, Microsoft has a large list of VM templates that can be used to provision VMs. These VMs come with a few pre-configured features that facilitate secure PowerShell remoting into the VMs:

  • WinRM is enabled and configured to listen on HTTPS port 5986
  • A certificate is already created to enable authentication from remote on-premises computers.

This PS script takes advantage of these settings and establishes a PS session with an Azure VM. Once the session is established, you can issue remote PS commands as shown in the examples.
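Establishing such a session typically looks like this. A sketch using the classic (Service Management) cmdlets of that era; service and VM names are hypothetical, and -SkipCACheck/-SkipCNCheck are only needed if the VM’s self-signed certificate hasn’t been imported locally:

```powershell
# Get the WinRM HTTPS (port 5986) endpoint for the VM (hypothetical names):
$uri  = Get-AzureWinRMUri -ServiceName 'MyCloudService' -Name 'MyVM'
$cred = Get-Credential
$opt  = New-PSSessionOption -SkipCACheck -SkipCNCheck
$s    = New-PSSession -ConnectionUri $uri.AbsoluteUri -Credential $cred -SessionOption $opt
# Issue remote commands in the session:
Invoke-Command -Session $s -ScriptBlock { Get-Service WinRM }
Remove-PSSession $s
```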


This script is also available as a function. The function facilitates re-using the code to connect to Azure VMs from other scripts.

Here’s an example of using this function in a larger script:


This script, which sets up certificate-based SQL mirroring on 2 SQL servers in Azure (explained in this post), provides an example of using this function.