AWS S3 and Windows Azure Blob Storage: not much in it

One of the first things I had to do as part of my deployment activities for both AWS and Windows Azure was to write some tools in C# to upload and download files from their respective storage solutions: S3 for AWS and Blob storage for Windows Azure. Both solutions are similar in the way you interact with them, and the similarities do not stop there. The comparison below sets the two storage systems side by side, essentially ending up showing how similar they are to work with.

AWS S3: An S3 account is associated with an AWS account, but the account name is NOT associated with the namespace of the objects stored on S3.
Windows Azure Blob Storage: A storage account is a globally uniquely identified entity within Blob storage. The account is the parent namespace for the Blob service.
AWS S3: Objects are placed in containers called buckets.
Windows Azure Blob Storage: Objects are placed in containers called, simply, containers.
AWS S3: An object is a file and, optionally, any metadata that describes that file.
Windows Azure Blob Storage: An object is represented by a blob. A blob is a resource made up of content, properties, and metadata.
AWS S3: Interaction with buckets and objects is via the SOAP and REST APIs.
Windows Azure Blob Storage: Interaction with containers and blobs is via the REST API.
AWS S3: The bucket name you choose must be unique across all existing bucket names in Amazon S3. Bucket names must comply with the following requirements:

  • Can contain lowercase letters, numbers, periods (.), underscores (_), and dashes (-)
  • Must start with a number or letter
  • Must be between 3 and 255 characters long
  • Must not be formatted as an IP address

To conform with DNS requirements, AWS recommends following these additional guidelines when creating buckets:

  • Bucket names should not contain underscores (_)
  • Bucket names should be between 3 and 63 characters long
  • Bucket names should not end with a dash
  • Bucket names cannot contain two adjacent periods
  • Bucket names cannot contain dashes next to periods

Windows Azure Blob Storage: The container name must be a valid DNS name, conforming to the following naming rules:

  • Container names must start with a letter or number, and can contain only letters, numbers, and the dash (-) character.
  • Every dash (-) must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
  • All letters in a container name must be lowercase.
  • Container names must be from 3 through 63 characters long.
  • Avoid blob names that end with a dot (.), a forward slash (/), or a sequence or combination of the two.
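Both sets of naming rules can be checked mechanically before you ever call either API. As a rough sketch (in Python rather than the C# used later, and simplified to the DNS-style guidelines above rather than every edge case the services enforce):

```python
import re

# Illustrative only: S3's DNS-compatible bucket guidelines (3-63 chars,
# lowercase letters/numbers/dots/dashes, no adjacent dots, no dash next
# to a dot, not an IP address). The real service enforces more rules.
BUCKET_RE = re.compile(r"^(?!.*\.\.)(?!.*-\.)(?!.*\.-)[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

# Azure container names: 3-63 chars, lowercase letters/numbers/dashes,
# no consecutive dashes, must start and end with a letter or number.
CONTAINER_RE = re.compile(r"^(?!.*--)[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def valid_s3_bucket_name(name: str) -> bool:
    # Reject anything formatted like an IPv4 address first
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False
    return BUCKET_RE.fullmatch(name) is not None

def valid_container_name(name: str) -> bool:
    # With no consecutive dashes and alphanumeric first/last characters,
    # every dash is necessarily surrounded by letters or numbers.
    return CONTAINER_RE.fullmatch(name) is not None
```

Note that both regexes deliberately exclude underscores, matching the DNS guidance rather than S3's looser legacy rules.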


AWS S3: An object has data, a key, and metadata. When you create an object you specify the key name, which uniquely identifies the object in the bucket. A key name is a sequence of Unicode characters whose UTF-8 encoding is at most 1,024 bytes long.
Windows Azure Blob Storage: A blob name can contain any combination of characters, but reserved URL characters must be properly escaped. A blob name must be at least one character long and cannot be more than 1,024 characters long.
AWS S3: You cannot nest buckets.
Windows Azure Blob Storage: You cannot nest containers.
AWS S3: Objects are stored in buckets, but you can create virtual folders within a bucket by using the '/' delimiter as part of the object name.
Windows Azure Blob Storage: The Blob service is based on a flat storage scheme, not a hierarchical scheme. However, you may specify a delimiter such as '/' within a blob name to create a virtual hierarchy.
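Neither store actually has folders; the "hierarchy" is just a convention a client applies to delimited names. A sketch of how delimiter-based listing works on a flat keyspace (illustrative Python, not tied to either SDK):

```python
def list_virtual(keys, prefix="", delimiter="/"):
    """Return the immediate 'files' and 'folders' under a prefix
    in a flat keyspace, the way a delimiter-based listing does."""
    files, folders = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter is a virtual folder
            folders.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            files.append(key)
    return files, sorted(folders)

keys = ["logs/2011/01.txt", "logs/2011/02.txt", "logs/readme.txt", "top.txt"]
files, folders = list_virtual(keys, prefix="logs/")
# files   -> ['logs/readme.txt']
# folders -> ['logs/2011/']
```

Both the S3 and Blob REST APIs accept a prefix and delimiter parameter that do essentially this grouping server-side.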
AWS S3: Buckets can be created in specific regions.
Windows Azure Blob Storage: Storage accounts can be created in specific regions.
AWS S3: Access to objects and buckets is managed via access control lists (ACLs) and bucket policies. You can use them independently or together.
Windows Azure Blob Storage: Access to blobs and containers is controlled via ACLs, which allow you to grant public access, and via shared access signatures, which provide more granular access.
AWS S3: To load large objects, use multipart upload, which lets you upload a single object as a set of parts. Parts can be uploaded in parallel to improve throughput, and smaller part sizes minimize the impact of restarting a failed upload after a network error.
Windows Azure Blob Storage: To upload large blobs, use block blobs, which allow blobs larger than 64 MB. Blocks can be uploaded in parallel, and failed uploads can be resumed by retrying only the blocks that weren't already uploaded.
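The mechanics are the same idea on both sides: split the payload into parts, upload the parts independently (possibly in parallel), and retry only the parts that fail. A minimal sketch of the splitting step (illustrative Python; 5 MB is S3's documented minimum size for non-final parts):

```python
def split_into_parts(data: bytes, part_size: int = 5 * 1024 * 1024):
    """Yield (part_number, chunk) pairs; part numbers start at 1, as in S3."""
    for offset in range(0, len(data), part_size):
        yield offset // part_size + 1, data[offset:offset + part_size]

payload = b"x" * (12 * 1024 * 1024)        # 12 MB of dummy data
parts = list(split_into_parts(payload))
# -> parts 1 and 2 of 5 MB each, plus a final 2 MB part 3
```

Each (part number, chunk) pair maps onto an UploadPart call in S3 or a PutBlock call in Blob storage, followed by a final "complete/commit" request that stitches the parts together.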


AWS S3: The location of your object in Amazon S3 is a URL, generally of the form http://[bucket-name].s3.amazonaws.com/[key].
Windows Azure Blob Storage: For a blob, the base URI includes the name of the account, the name of the container, and the name of the blob: http://[account].blob.core.windows.net/[container]/[blob].
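Assuming the standard public endpoints (virtual-hosted-style S3 URLs and the default blob endpoint; custom domains change both), the two addressing schemes can be sketched as:

```python
from urllib.parse import quote

def s3_url(bucket: str, key: str) -> str:
    # Virtual-hosted-style S3 URL: bucket in the hostname, key in the path
    return f"http://{bucket}.s3.amazonaws.com/{quote(key)}"

def blob_url(account: str, container: str, blob: str) -> str:
    # Azure blob URI: account in the hostname, container and blob in the path
    return f"http://{account}.blob.core.windows.net/{container}/{quote(blob)}"

print(s3_url("my-bucket", "logs/2011/01.txt"))
print(blob_url("myaccount", "pics", "kitten.jpg"))
```

Note the quote() call: this is where the "reserved URL characters must be escaped" rule for blob names comes in (quote leaves '/' alone by default, which is what you want for virtual folders).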


AWS S3: To access programmatically, use the AWS SDK.
Windows Azure Blob Storage: To access programmatically, use the Azure SDK.
AWS S3: With .NET, the first step before interacting with S3 is to provide your AWS credentials, e.g.:

public static string accessKeyID;
public static string secretAccessKeyID;

NameValueCollection appConfig = ConfigurationManager.AppSettings;
accessKeyID = appConfig["AWSAccessKey"];
secretAccessKeyID = appConfig["AWSSecretKey"];
Targetbucket = appConfig["TargetBucket"];

// Set up the connection to AWS S3
using (s3client = Amazon.AWSClientFactory.CreateAmazonS3Client(accessKeyID, secretAccessKeyID))

Windows Azure Blob Storage: With .NET, the first step before interacting with Blob storage is to provide your Azure Storage credentials, e.g.:

String AccountName_var = (String)ConfigurationSettings.AppSettings["AccountName"];
String AccountSharedKey_var = (String)ConfigurationSettings.AppSettings["AccountSharedKey"];
String ContainerName_var = (String)ConfigurationSettings.AppSettings["ContainerName"];

// Set up the connection to Windows Azure Storage
StorageCredentialsAccountAndKey storageCredentialsAccountAndKey = new StorageCredentialsAccountAndKey(AccountName_var, AccountSharedKey_var);
_BlobClient = new CloudBlobClient(AccessUri, storageCredentialsAccountAndKey);


AWS S3: To use a custom domain requires the use of CNAMEs.
Windows Azure Blob Storage: To use a custom domain requires the use of CNAMEs.
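In both cases this is just a DNS record pointing your own hostname at the service endpoint, along the lines of (hypothetical zone entries, with hostnames made up for illustration):

```
files.example.com.  CNAME  my-bucket.s3.amazonaws.com.
files.example.com.  CNAME  myaccount.blob.core.windows.net.
```

For S3 the bucket name must match the custom hostname; for Azure the custom domain also has to be registered against the storage account before it will resolve.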
AWS S3: A reduced redundancy option is available to reduce costs.
Windows Azure Blob Storage: No equivalent.


Update to add some sizing info:

S3 has a maximum size limit of 5 TB per object, whereas Azure Blob storage has a maximum size limit for a single file of 1 TB (page blob max size: 1 TB; block blob max size: 400 GB).

There is a strict limit of 100 TB per storage account for Windows Azure. The following post describes the scalability targets for Windows Azure Storage.

S3 does not appear to have a limit on the total size of stored objects; I guess that's down to how much money you wish to spend. There is, however, a limit of 100 buckets per AWS account.




  1. James Saull · March 16, 2011

    Is there a maximum file size difference? Max files per bucket/container limit? SLA difference?

    • Grace Mollison · March 16, 2011

      AWS have the better SLAs. I’ll add info on bucket/container limit 🙂

