As part of a migration from on-premises to the Azure cloud, we have to decide between (1) physically shipping a disk containing a few terabytes of database backups to Microsoft and having Microsoft upload the disk's contents to Azure blob storage, or (2) uploading the contents ourselves. Once in Azure blob storage, the database backups are to be extracted and restored on multiple MS SQL servers to finalize the migration.
As a crude measure of the upload and download capacity of Azure blob storage, we set up a storage account in North Europe with locally redundant replication. Inside the account, we added a container named default and fetched the primary access key to include in the connection string below.
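For completeness, the container can also be created from code rather than through the portal. Here's a minimal sketch, assuming the same connection string as in the program below (CreateIfNotExists is idempotent, so running it repeatedly is safe):

// Create the "default" container unless it already exists.
var account = CloudStorageAccount.Parse(connectionString); // connection string as in the program below
var container = account.CreateCloudBlobClient().GetContainerReference("default");
container.CreateIfNotExists();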
Now we're able to upload and download blobs of configurable sizes:
// reference WindowsAzure.Storage NuGet package
using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.Storage;

namespace AzureBlobStorageSpeedTest
{
    class Program
    {
        const int Megabyte = 1024 * 1024;
        const int Megabit = 1000 * 1000;

        // Runs an action and returns its duration in seconds.
        static double Time(Action toMeasure)
        {
            var w = new Stopwatch();
            w.Start();
            toMeasure();
            w.Stop();
            return w.ElapsedMilliseconds / 1000.0;
        }

        static void Main(string[] args)
        {
            // Payload size in megabytes is passed on the command line.
            var sizeInMegabytes = int.Parse(args[0]);
            var contentLength = sizeInMegabytes * Megabyte;

            // Random bytes defeat any compression along the way.
            var randomizedContent = new byte[contentLength];
            new Random().NextBytes(randomizedContent);

            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;" +
                "AccountName=azurespeedtestpost;" +
                "AccountKey=OsgbBQ+nkuT0CRjbhyZ0dK3aIvpkDltEdQ3xvuCBsJlZThRFVEIJ6zXz050BENnw/c6aaVeOrCKz/oNM5UYAeQ==");

            var client = account.CreateCloudBlobClient();

            // Split each transfer into up to 24 concurrent block operations.
            client.DefaultRequestOptions.ParallelOperationThreadCount = 24;

            var container = client.GetContainerReference("default");
            var blob = container.GetBlockBlobReference("test-" + DateTime.Now);

            var uploadTime = Time(() =>
                blob.UploadFromByteArray(randomizedContent, 0, contentLength));
            var downloadTime = Time(() =>
                blob.DownloadRangeToByteArray(randomizedContent, 0, 0, contentLength));

            Console.WriteLine(
                "{0} megabytes uploaded in {1:0.0} seconds at {2:0.0} megabits/second\r\n" +
                "{0} megabytes downloaded in {3:0.0} seconds at {4:0.0} megabits/second",
                sizeInMegabytes,
                uploadTime, contentLength / uploadTime * 8 / Megabit,
                downloadTime, contentLength / downloadTime * 8 / Megabit);
        }
    }
}
Note how the application allocates a byte array of the designated size in megabytes. To avoid out-of-memory exceptions, make sure to compile the application as 64-bit to get around the 2 GB address space limitation of 32-bit Windows processes.
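As a safeguard, the process bitness can also be verified at run time with Environment.Is64BitProcess. The guard below is our addition, not part of the test above:

// Fail fast if accidentally running as a 32-bit process, where a gigabyte-sized
// array plus the SDK's own buffers can exhaust the 2 GB address space.
if (!Environment.Is64BitProcess)
    throw new InvalidOperationException("Run the speed test as a 64-bit process.");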
Running the speed test application on a Windows Server 2012 Azure virtual machine, here's an average result:
%> ./AzureBlobStorageSpeedTest 1024
1024 megabytes uploaded in 51.1 seconds at 168.9 megabits/second
1024 megabytes downloaded in 140.8 seconds at 71.5 megabits/second
On a Windows 7 laptop running outside Azure, but with a gigabit network card and an Internet connection capable of 200 megabits/second according to speedtest.net, an average result is as follows:
%> ./AzureBlobStorageSpeedTest 1024
1024 megabytes uploaded in 106.4 seconds at 80.7 megabits/second
1024 megabytes downloaded in 416.7 seconds at 20.6 megabits/second
For a content size of only a couple of terabytes, uploading is sufficiently fast that it doesn't warrant the time and trouble of shipping a disk to Microsoft. Even at a modest 50 megabits/second, uploading one terabyte takes about two days. And that doesn't factor in uploading blobs in parallel, either from computers on disparate networks or from multiple computers on the same network, as sketched below. Once in storage, the pieces can be reassembled and the database backups restored.
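Here's a rough sketch of what such a parallel upload might look like. The folder path and the pre-split backup files are hypothetical; UploadFromStream and Parallel.ForEach are standard APIs:

// reference WindowsAzure.Storage NuGet package
using System.IO;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;

class ParallelUploadSketch
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("<connection string as above>");
        var client = account.CreateCloudBlobClient();

        // Within each blob, split the upload into up to 24 concurrent block uploads.
        client.DefaultRequestOptions.ParallelOperationThreadCount = 24;
        var container = client.GetContainerReference("default");

        // Hypothetical: backup files split into pieces ahead of time.
        var backupFiles = Directory.GetFiles(@"D:\Backups", "*.bak");

        // Across blobs, upload several files at once.
        Parallel.ForEach(backupFiles, path =>
        {
            var blob = container.GetBlockBlobReference(Path.GetFileName(path));
            using (var stream = File.OpenRead(path))
            {
                blob.UploadFromStream(stream);
            }
        });
    }
}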
Lastly, don't forget that what goes up must come down. Downloading from blob storage happens at a considerably slower pace than uploading, but it's subject to the same kinds of performance optimizations, one of which is sketched below.
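One such optimization is fetching the blob in ranges on multiple threads rather than as a single sequential stream. A minimal sketch, assuming a blob whose content fits in a single byte array; the blob name and the 16 megabyte chunk size are assumptions:

// reference WindowsAzure.Storage NuGet package
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;

class ParallelDownloadSketch
{
    const int ChunkSize = 16 * 1024 * 1024; // 16 megabyte ranges (assumed)

    static void Main()
    {
        var account = CloudStorageAccount.Parse("<connection string as above>");
        var container = account.CreateCloudBlobClient().GetContainerReference("default");
        var blob = container.GetBlockBlobReference("some-backup-piece"); // hypothetical name

        blob.FetchAttributes(); // populates blob.Properties.Length
        var length = blob.Properties.Length;
        var buffer = new byte[length]; // assumes length < 2 GB
        var chunks = (int)((length + ChunkSize - 1) / ChunkSize);

        // Each iteration downloads one range into its own slice of the buffer.
        Parallel.For(0, chunks, i =>
        {
            var offset = (long)i * ChunkSize;
            var count = Math.Min(ChunkSize, length - offset);
            blob.DownloadRangeToByteArray(buffer, (int)offset, offset, count);
        });
    }
}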
For an in-depth, hands-on series of blog posts on Azure blob storage, have a look at the Just Azure blog. Also check out the free Blob transfer utility.