
After teaching math (and a little computer science) for 14 years, Brian changed careers in 2006 with the idea that getting out of his comfort zone would be good. He now works as a writer and programmer at Microsoft, specializing in PHP and Windows Azure. Brian is a DZone MVB (not an employee of DZone) and has posted 82 posts at DZone.

Improving Performance by Batching Azure Table Storage Inserts


This is a short post to share the results of a little investigation inspired by comments on a post I wrote about using SQL Azure for handling session data. One commenter reported that SQL Azure seemed to be faster than Azure Table Storage for handling session data. My experiments show that SQL Azure and Table Storage have very similar performance when doing single writes (YMMV), so I can neither verify nor refute that claim. However, I got to wondering which is faster for inserting and retrieving many “rows” of data. I know that Table Storage is supposed to be faster, but I wondered how much faster. So I wrote a two-part PHP script. The first part does the following:

  1. Connects to SQL Azure.
  2. Inserts 100 rows to an existing database.
  3. Retrieves the 100 rows.

Here’s the code:

$conn = sqlsrv_connect(SQLAZURE_SERVER_ID.",1433", array("UID"=>SQLAZURE_USER."@".SQLAZURE_SERVER_ID
                                                       , "PWD"=>SQLAZURE_PASSWORD
                                                       , "Database"=>SQLAZURE_DB
                                                       , "ReturnDatesAsStrings"=>true));

// Insert 100 rows.
for($i = 0; $i < 100; $i++)
{
    $id = $i;
    $data = "GolferMessage".$i;
    $params = array($id, $data);
    $stmt1 = sqlsrv_query($conn, "INSERT INTO Table_1 (id, data) VALUES (?,?)", $params);
    if($stmt1 === false)
        die(print_r(sqlsrv_errors(), true));
}

// Retrieve the 100 rows.
$stmt2 = sqlsrv_query($conn, "SELECT id, data, timestamp FROM Table_1");
while($row = sqlsrv_fetch_array($stmt2, SQLSRV_FETCH_ASSOC))
{
    echo $row['id'].": ".$row['data']."<br />";
}

Note: The code above uses the SQL Server Driver for PHP to connect to SQL Azure.

The second part of the script does the equivalent for Table Storage:

  1. Connects to Azure Storage.
  2. Inserts 100 entities to an existing table.
  3. Retrieves the 100 entities.

Here’s the code:

$tableStorageClient = new Microsoft_WindowsAzure_Storage_Table('table.core.windows.net', STORAGE_ACCOUNT_NAME, STORAGE_ACCOUNT_KEY);

// Start a batch so that all inserts are sent over a single connection.
$batch = $tableStorageClient->startBatch();

// Insert 100 entities.
for($i = 0; $i < 100; $i++)
{
    $name = $i;
    $message = "GolferMessage".$i;
    $mbEntry = new MessageBoardEntry();
    $mbEntry->golferName = $name;
    $mbEntry->golferMessage = $message;
    $tableStorageClient->insertEntity('MessageBoardEntry', $mbEntry);
}
$batch->commit();

// Retrieve the 100 entities.
$messages = $tableStorageClient->retrieveEntities("MessageBoardEntry", null, "MessageBoardEntry");
foreach($messages as $message)
{
    echo $message->golferName.": ".$message->golferMessage."<br />";
}

Note: The code above uses the Windows Azure SDK for PHP to connect to Azure Storage.

The result of the test was that Table Storage was consistently 4 to 5 times faster than SQL Azure (again, YMMV). The key, however, was to use the $tableStorageClient->startBatch() and $batch->commit() methods with Table Storage. Without using batches, Table Storage opens and closes a new HTTP connection for each write, which results in slower performance than SQL Azure (which keeps a connection open for writes). When using batches with Table Storage, the connection is kept open for all writes.
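For reference, the comparison above can be timed with a simple harness along these lines. This is a sketch, not the script I used: the helper functions insertSqlAzureRows() and insertTableStorageBatch() are hypothetical names assumed to wrap the two code paths shown earlier.

```php
<?php
// Hypothetical timing harness: wrap each of the two code paths above in a
// function and compare elapsed wall-clock time. insertSqlAzureRows() and
// insertTableStorageBatch() are assumed to contain the SQL Azure and
// Table Storage code shown earlier.
$start = microtime(true);
insertSqlAzureRows($conn);                    // 100 single-row INSERTs
$sqlElapsed = microtime(true) - $start;

$start = microtime(true);
insertTableStorageBatch($tableStorageClient); // 100 inserts in one batch
$tableElapsed = microtime(true) - $start;

printf("SQL Azure: %.3f s, Table Storage (batched): %.3f s\n",
       $sqlElapsed, $tableElapsed);
?>
```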

Note: Many thanks to Maarten Balliauw, who suggested I try batching when I was perplexed by the results of my tests. I expected Table Storage to be faster, but because I didn’t know about batches for Table Storage, I wasn’t getting the results I expected.

The complete script (with set up/tear down of database and Table) is attached in case you want to try for yourself.





Published at DZone with permission of Brian Swan, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)



Yaron Levy replied on Sun, 2012/06/10 - 10:37am

Ok, so now to do a more fair comparison, let's try to apply some batching to SQL Azure as well :))

You could start by wrapping those 100 INSERT operations into a BEGIN / COMMIT TRAN block, or using bulk copy API for example.
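For illustration, the transaction approach suggested here could look something like the following with the SQL Server Driver for PHP. This is a sketch assuming the $conn connection and Table_1 schema from the article's code.

```php
<?php
// Sketch of the commenter's suggestion: run the 100 INSERTs inside a
// single transaction so SQL Azure commits them as one unit.
if (sqlsrv_begin_transaction($conn) === false)
    die(print_r(sqlsrv_errors(), true));

for ($i = 0; $i < 100; $i++)
{
    $params = array($i, "GolferMessage".$i);
    $stmt = sqlsrv_query($conn, "INSERT INTO Table_1 (id, data) VALUES (?,?)", $params);
    if ($stmt === false)
    {
        // Undo the partial batch on any failure.
        sqlsrv_rollback($conn);
        die(print_r(sqlsrv_errors(), true));
    }
}
sqlsrv_commit($conn);
?>
```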

