CloudBerry Drive Server for Windows Server is a tool by CloudBerry that makes cloud storage available on a server as a drive letter. I examined 10 different tools for this task, and CloudBerry Drive provided the most functionality. The use case I was after is the ability to upload large files from on-prem servers to Azure VMs. Specifically, I’m testing Veeam Cloud Connect with Azure, which allows for off-site backup to Azure. The backup files are multi-TB each.
However, digging deeper into how it works showed that CloudBerry Drive caches each received file to a local folder on the VM. According to CloudBerry support, this caching is required and cannot be turned off. This poses several problems:
- It defeats the purpose of using CloudBerry in the first place. An Azure VM (as of 10/2/2014) can have a maximum of 16 TB of local storage, which is implemented as 16x 1TB VHD files (page blobs). The point of using CloudBerry Drive is to be able to access Azure block blob storage, which has a 500 TB maximum per storage account.
- It caps the file size at the maximum amount of free space on the local drive used for CloudBerry caching.
- CloudBerry Drive then takes the uploaded file from the cache folder and copies it to the Azure block blob storage account.
- This leaves the destination file in Azure block blob storage locked and unavailable for many hours during that second copy. For example, if the Veeam cloud backup job successfully backed up 10 out of 12 VMs and we retry the remaining 2 VMs, the job will fail since the destination file in Azure is locked by CloudBerry.
- The second copy uses a great amount of read IOPS from the local drive (page blobs) and write IOPS to the destination block blob storage. This makes any other task on the VM, such as another backup job, practically impossible, even if that job uses other, unlocked files, because CloudBerry uses up all available IOPS on the VM for hours or even days.
- The copy unnecessarily incurs transaction, IOPS, and bandwidth charges on the Azure VM.
- There are better ways to copy data within the same Azure storage account that are much more efficient and much less costly, such as instantaneous shadow copies.
CloudBerry Drive Server for Windows Server caches files locally which makes it not suitable for use on Azure VMs.
There are a number of ways to make Azure storage available to a VM in Azure:
- Attach a number of local VHD disks. There are a couple of issues with this approach:
- The maximum we can use is 16TB, and
- We’ll have to use an expensive A4-sized VM, which has unneeded RAM and CPU cores.
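As a sketch, attaching the maximum number of empty data disks can be scripted with the classic (Service Management) Azure PowerShell module; the cloud service name, VM name, and free LUN numbers below are assumptions for illustration:

```powershell
# Sketch, assuming the classic Azure PowerShell module is installed and a
# subscription is selected. Service/VM names are hypothetical, and the LUN
# range assumes LUNs 0-3 are free on the VM.
$svc = "MyCloudService"   # assumption: your cloud service name
$vm  = "MyVM"             # assumption: your VM name
0..3 | ForEach-Object {
    Get-AzureVM -ServiceName $svc -Name $vm |
        Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "data$_" -LUN $_ |
        Update-AzureVM
}
```

Adjust the `0..3` range to match the disk count your VM size allows (e.g. up to 16 for an A4).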
- Map drives to a number of Azure File SMB shares. There are a couple of issues with this approach:
- The shares are not persistent, although we can use the CMDKEY tool as a workaround.
- There’s a maximum of 5TB capacity per share, and a maximum of 1TB capacity per file.
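The CMDKEY workaround mentioned above can be sketched as follows; the storage account name and share name are hypothetical, and the exact `/user:` form may vary:

```powershell
# Sketch, assuming a hypothetical storage account "mystoracct" and share
# "share1". cmdkey stores the credentials in Windows Credential Manager so
# the mapping survives reboots; <storage-account-key> is a placeholder.
cmdkey /add:mystoracct.file.core.windows.net /user:mystoracct /pass:<storage-account-key>
net use Z: \\mystoracct.file.core.windows.net\share1 /persistent:yes
```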
- Use a 3rd party tool such as CloudBerry Drive to make Azure block blob storage available to the Azure VM. This approach has the 500TB storage account limit, which is adequate for use with Veeam Cloud Connect. Microsoft suggests that the maximum NTFS volume size is between 16TB and 256TB on Server 2012 R2, depending on allocation unit size. Using this tool we get a 128TB disk, suggesting an allocation unit size of 32KB.
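The 16TB–256TB range follows from NTFS supporting at most 2^32 − 1 clusters per volume, so the ceiling is the cluster count times the allocation unit size. A quick PowerShell check of the arithmetic:

```powershell
# NTFS supports at most 2^32 - 1 clusters per volume, so the maximum volume
# size is (cluster count) x (allocation unit size).
$maxClusters = [math]::Pow(2, 32) - 1
foreach ($au in 4KB, 16KB, 32KB, 64KB) {
    "{0,2:N0} KB allocation unit -> {1:N0} TB max volume" -f ($au / 1KB), ($maxClusters * $au / 1TB)
}
```

4KB units give ~16TB and 64KB units give ~256TB, matching Microsoft's stated range; 32KB units give ~128TB, matching the disk size CloudBerry presents.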
To install CloudBerry Drive on an Azure VM:
– Install the Microsoft Visual C++ 2010 x64 Redistributable prerequisite:
– Run CloudBerryDriveSetup, accept the defaults, and reboot.
– In the Azure Management Portal, obtain your storage account access key (either one is fine):
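If you prefer not to use the portal, the key can also be retrieved with the classic Azure PowerShell module; the storage account name below is an assumption:

```powershell
# Sketch, assuming the classic Azure PowerShell module and a hypothetical
# storage account name. Either the Primary or Secondary key works.
(Get-AzureStorageKey -StorageAccountName "mystoracct").Primary
```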
– Back in the Azure VM, right-click on the Cloudberry icon in the system tray and select Options:
– Under the Storage Accounts tab, click Add, pick Azure Blob as your Storage Provider, enter your Azure Storage account name and key:
– Under the Mapped Drives tab, click Add, type in a volume label, click the button next to Path, and pick a Container. This is the container we created in step 3 above:
– You can see the available volumes in Windows Explorer or by running this command in PowerShell:
Get-Volume | FT -AutoSize
Add VHD disks to the VM for the CloudBerry Drive cache:
We’ll add VHD disks to the VM so that the cache folder has sufficient disk space and IOPS.
Highlight the Azure VM, click Attach at the bottom, and click Attach empty disk. Enter a name for the disk VHD file, and a size. The maximum size allowed is 1023 GB (as of September 2014). Repeat this process to add as many disks as allowed by your VM size. For example, an A1 VM can have a maximum of 2 disks, A2 max is 4, A3 max is 8, and A4 max is 16 disks.
In the Azure VM, I created a 2TB disk using Storage Spaces on the VM as shown:
This is set up as a simple disk for maximum disk space and IOPS, but it can be set up as mirrored disks as well.
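The Storage Spaces setup above can be sketched in PowerShell inside the VM (Server 2012 R2 cmdlets); the pool name, disk name, drive letter, and label are assumptions:

```powershell
# Sketch: pool all attachable data disks into one simple (striped) virtual
# disk and format it NTFS. Names and the drive letter are hypothetical.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "CachePool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "CachePool" -FriendlyName "CacheDisk" `
    -ResiliencySettingName Simple -UseMaximumSize
Get-VirtualDisk -FriendlyName "CacheDisk" |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CBCache"
```

For the mirrored variant, swap `-ResiliencySettingName Simple` for `Mirror`, at the cost of half the usable space.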
Create a folder for the CloudBerry Drive cache on the new disk, and configure CloudBerry Drive to use it:
It’s important to have enough disk space on the drive where CloudBerry caching occurs. The amount of free space on the caching drive limits the size of files that can be handled through CloudBerry Drive, which can be much less than the 128TB available on a CloudBerry drive backed by Azure block blob storage.
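Since free cache space is the effective per-file ceiling, it is worth checking before a large backup run; the drive letter below is an assumption matching the cache disk created earlier:

```powershell
# Quick check (assumed cache drive letter F:) of the effective maximum file
# size CloudBerry Drive can currently handle through its cache.
$free = (Get-Volume -DriveLetter F).SizeRemaining
"{0:N0} GB free on the cache drive; files larger than this will fail" -f ($free / 1GB)
```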