Using the right database tool

April 27, 2014

Robert Haas, a major contributor and committer on the PostgreSQL project, recently wrote a provocative post entitled “Why the Clock is Ticking for MongoDB.”  He was responding to a post by the CEO of MongoDB, “Why the clock’s ticking for relational databases.”  I am no database expert, but it occurs to me that relational databases are not going anywhere AND NoSQL databases absolutely have a place in the modern world.  (I do not believe Haas was implying otherwise.)  It is a matter of using the right tool to solve the business problem.

As Haas indicates, RDBMS solutions are great for many problems, such as query and analysis workloads where the ACID properties (Atomicity, Consistency, Isolation, and Durability) are important considerations.  When the size of the data, the need for global scale, and the transaction volume grow (think Twitter, Gmail, Flickr), NoSQL (read: not-only-SQL) solutions make a ton of sense.
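
To make the ACID part concrete, here is a minimal sketch (mine, not Haas’s) of an atomic transfer using ADO.NET’s TransactionScope; the bank schema and connection string are made up for illustration:

using System;
using System.Data.SqlClient;
using System.Transactions;

class AcidDemo
{
    static void Main()
    {
        // Hypothetical connection string and accounts table, for illustration only.
        const string cs = "Server=localhost;Database=bank;Integrated Security=true";

        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(cs))
        {
            conn.Open();
            // Debit one account and credit another as a single unit of work.
            new SqlCommand("UPDATE accounts SET balance = balance - 100 WHERE id = 1", conn).ExecuteNonQuery();
            new SqlCommand("UPDATE accounts SET balance = balance + 100 WHERE id = 2", conn).ExecuteNonQuery();
            // Complete() commits both updates; any exception before this line
            // rolls back both, which is atomicity and consistency at work.
            scope.Complete();
        }
    }
}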

Kristof Kovacs has written the most complete comparison of the various NoSQL solutions.  MongoDB seems to be the most popular document database, Cassandra for row/column data, and Couchbase for caching.  Quoting Kovacs: “That being said, relational databases will always be the best for the stuff that has relations.”  To that end there is no shortage of RDBMS solutions from the world’s largest software vendors (Oracle – 12c, Microsoft – SQL Server, IBM – DB2) as well as many open source solutions such as SQLite, MySQL, and PostgreSQL.

In the spirit of being complete, Hadoop is not a database per se – though HBase is a database built on top of Hadoop.  Hadoop is a technology meant for crunching large amounts of data in a distributed manner, typically using batch jobs and the map-reduce design pattern.  It can be used with many NoSQL databases such as Cassandra.
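
As a rough illustration of the map-reduce pattern (a single-machine sketch in C#, not Hadoop’s actual API), here is the canonical word count; a real Hadoop job spreads the map and reduce phases across many machines:

using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceSketch
{
    static void Main()
    {
        var documents = new[] { "the quick brown fox", "the lazy dog" };

        // Map: emit a (word, 1) pair for every word in every document.
        var mapped = documents
            .SelectMany(doc => doc.Split(' '))
            .Select(word => new KeyValuePair<string, int>(word, 1));

        // Shuffle and reduce: group the pairs by word and sum the counts.
        var reduced = mapped
            .GroupBy(pair => pair.Key)
            .Select(g => new { Word = g.Key, Count = g.Sum(p => p.Value) });

        foreach (var entry in reduced)
            Console.WriteLine("{0}: {1}", entry.Word, entry.Count);
    }
}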

Hosting an MVC3 (with membership) application on EC2

February 4, 2012

One of my side projects was to get an MVC3 application that uses the Razor View Engine and Membership hosted on EC2 running Linux. I found some amazingly helpful resources along the way – particularly from Nathan Bridgewater at Integrated Web Systems.

Step one of the project is to get an EC2 instance prepped and ready.  Basically I followed the cookbook instructions on Bridgewater’s site – Get Started with Amazon EC2, Run Your .Net MVC3 (Razor) Site in the Cloud with Linux Mono.

The exact commands I used:

Create a new instance from AMI ami-ccf405a5 and associate an elastic IP (xx.xx.xx.xx)
sudo apt-get update && sudo apt-get dist-upgrade -y
sudo apt-key add directhex.ppa.asc
sudo apt-get install python-software-properties
sudo add-apt-repository 'deb lucid main'
sudo apt-get update
sudo apt-get install mono-apache-server4 mono-devel libapache2-mod-mono
cd /srv
sudo mkdir www; cd www
sudo mkdir default
sudo chown www-data:www-data default
sudo chmod 755 default
cd /etc/apache2/sites-available/
sudo vi mono-default (see mono-default, change IP address)
cd /etc/apache2/sites-enabled
sudo rm 000-default
sudo ln -s /etc/apache2/sites-available/mono-default 000-mono
sudo mv /var/www/index.html /srv/www/default
sudo vi /srv/www/default/index.html
sudo apt-get install apache2
sudo service apache2 restart
Test in a browser via the IP address (you should see the default Apache page)

My mono-default file:

<VirtualHost *:80>
  # xx.xx.xx.xx is my Elastic IP address
  ServerName xx.xx.xx.xx
  DocumentRoot /srv/www/default
  MonoServerPath xx.xx.xx.xx "/usr/bin/mod-mono-server4"
  MonoDebug xx.xx.xx.xx true
  MonoSetEnv xx.xx.xx.xx MONO_IOMAP=all
  MonoApplications xx.xx.xx.xx "/:/srv/www/default"

  <Location "/">
    Allow from all
    Order allow,deny
    MonoSetServerAlias xx.xx.xx.xx
    SetHandler mono
    SetOutputFilter DEFLATE
    SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
  </Location>

  <IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
  </IfModule>
</VirtualHost>

Step two is to test Mono with a simple page.  Put this file into /srv/www/default.  Edit with sudo and view via browser at http://xx.xx.xx.xx/test.aspx.

<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>ASP.Net Test page</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script runat="server">
private void Page_Load(Object sender, EventArgs e)
{
    lblTest.Text = "This is a successful test.";
}
</script>
</head>
<body>
<h1>This is a test page</h1>
<asp:Label runat="server" ID="lblTest"></asp:Label>
</body>
</html>
If problems are encountered, check the logs in /var/log/apache2/access.log and /var/log/apache2/error.log.

Step three is to get MySQL installed and tested with this simple application.
sudo apt-get install mysql-server
sudo apt-get install libmysql6.1-cil
CREATE DATABASE sample; USE sample;
-- (table definition reconstructed to fit the INSERTs below)
CREATE TABLE test (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50));
INSERT INTO sample.test VALUES (null, 'Lucy');
INSERT INTO sample.test VALUES (null, 'Ivan');
INSERT INTO sample.test VALUES (null, 'Nicole');
INSERT INTO sample.test VALUES (null, 'Ursula');
INSERT INTO sample.test VALUES (null, 'Xavier');
CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'somepassword';
GRANT ALL PRIVILEGES ON sample.* TO 'testuser'@'localhost';

Put this file into /srv/www/default. Edit with sudo and view via browser at the instance’s IP address.

<%@ Page Language="C#" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="MySql.Data.MySqlClient" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>ASP and MySQL Test Page</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script runat="server">
private void Page_Load(Object sender, EventArgs e)
{
    // Server=localhost assumed; the host name did not survive in the original.
    string connectionString = "Server=localhost;Database=sample;User ID=testuser;Password=somepassword;Pooling=false;";
    MySqlConnection dbcon = new MySqlConnection(connectionString);
    dbcon.Open();

    MySqlDataAdapter adapter = new MySqlDataAdapter("SELECT * FROM test", dbcon);
    DataSet ds = new DataSet();
    adapter.Fill(ds, "result");

    dbcon.Close();
    dbcon = null;

    SampleControl.DataSource = ds.Tables["result"];
    SampleControl.DataBind();
}
</script>
</head>
<body>
<h1>Testing Sample Database</h1>
<asp:DataGrid runat="server" ID="SampleControl" />
</body>
</html>

Step four is to get the simplest possible MVC3 Razor application functioning on Ubuntu / EC2.  Again, Bridgewater has a more detailed explanation of what to do on his website.

  1. Go into Visual Studio 2010 and create a new MVC3 / Razor project, making no changes to the default project template.
  2. Build and run it locally.
  3. Ensure that these references are set to “Copy Local”: System.Web.Mvc, System.Web.Helpers, and System.Web.Routing.
  4. Copy System.Web.Razor, System.Web.WebPages, System.Web.WebPages.Razor, and System.Web.WebPages.Deployment into your application’s bin directory.  You will find these files in C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies.
  5. Publish the application to a scratch directory.
  6. Copy the published application to your EC2 machine.  I used git bash to tar the files (tar -zcvf aws.tar.gz *) as Bridgewater recommends but could not get scp to work, so I FTP’d the file over.
  7. On the EC2 machine: cd /srv/www/default; sudo mv /home/ubuntu/aws.tar.gz .; sudo tar -zxvf *.gz; sudo chown -R www-data:www-data *; sudo chmod 755 *; sudo service apache2 restart
  8. Confirm it is working from a browser by checking the default IP address.
  9. NB: I had to hit refresh several times before the application would work.

Step five is to implement membership using MySQL.

  1. On your Windows machine, edit the default controller and decorate it with the [Authorize] attribute (a sketch follows this list).
  2. Edit your web.config as shown below.  This is where it can get hairy.  If you want to run this locally on Windows you need to install the MySQL Connector/Net; make sure that you reference System.Web.  On Ubuntu the application uses the Mono-packaged connector installed earlier (libmysql6.1-cil).  The trick is to register both so you can run the same code on Ubuntu and Windows.  Also notice that I’ve left the database password in clear text.  As Nathan notes, this is not a good practice.
  3. On the Ubuntu machine, go into MySQL and create a database called membership.
  4. Deploy the application to EC2 and test it as in step four.
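
For item 1 the change is a one-line attribute; here is a minimal sketch based on the stock MVC3 template’s HomeController (your controller may differ):

using System.Web.Mvc;

// [Authorize] forces forms authentication for every action on this
// controller, which is what exercises the MySQL membership provider.
[Authorize]
public class HomeController : Controller
{
    public ActionResult Index()
    {
        ViewBag.Message = "Welcome to ASP.NET MVC!";
        return View();
    }

    public ActionResult About()
    {
        return View();
    }
}
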
<?xml version="1.0"?>
<!-- For more information on how to configure your ASP.NET application, please visit the ASP.NET documentation. -->
<configuration>
  <connectionStrings>
    <!-- data source and password are placeholders; the original values did not survive. -->
    <add name="Default"
         connectionString="data source=localhost;database=membership;user id=aspnet_user;password=yourpassword;"
         providerName="MySql.Data.MySqlClient" />
  </connectionStrings>

  <system.web>
    <compilation debug="true" targetFramework="4.0">
      <assemblies>
        <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
      </assemblies>
    </compilation>

    <authentication mode="Forms">
      <forms loginUrl="~/Account/LogOn" path="/" timeout="2880" />
    </authentication>

    <!--NOTE that "hashed" isn't supported with the public release of MySql.Web 6.3.5 under the
     Mono runtime. But I can't bring myself to share sample code that doesn't hash the
     passwords by default. ;) The version included with this sample project is slightly modified to
     allow hashed passwords in Mono. I highly recommend checking out the latest version of the
     MySql .NET Connector.

     Also, I found that you have to rebuild MySql.Data and MySql.Web
     using the .NET 4.0 profile if you want it to work with Asp.Net 4.0 under Mono. This is a known
     bug and the fix should be published in upcoming versions of the connector. -->
    <membership defaultProvider="MySqlMembershipProvider">
      <providers>
        <clear />
        <add name="MySqlMembershipProvider"
             connectionStringName="Default"
             type="MySql.Web.Security.MySQLMembershipProvider, mysql.web" />
      </providers>
    </membership>

    <roleManager enabled="true" defaultProvider="MySqlRoleProvider">
      <providers>
        <clear />
        <add name="MySqlRoleProvider"
             connectionStringName="Default"
             type="MySql.Web.Security.MySQLRoleProvider, mysql.web" />
      </providers>
    </roleManager>

    <profile>
      <providers>
        <clear />
        <add name="MySqlProfileProvider"
             connectionStringName="Default"
             type="MySql.Web.Security.MySqlProfileProvider, mysql.web" />
      </providers>
    </profile>

    <pages>
      <namespaces>
        <add namespace="System.Web.Mvc" />
        <add namespace="System.Web.Mvc.Ajax" />
        <add namespace="System.Web.Mvc.Html" />
        <add namespace="System.Web.Routing" />
      </namespaces>
    </pages>

    <!--Don't forget to update this... I left it open to make it easier to debug.-->
    <customErrors mode="Off"/>
  </system.web>

  <system.data>
    <DbProviderFactories>
      <add name="MySQL Data Provider"
           invariant="MySql.Data.MySqlClient"
           description="ADO.Net driver for MySQL"
           type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data"/>
    </DbProviderFactories>
  </system.data>

  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <modules runAllManagedModulesForAllRequests="true"/>
  </system.webServer>

  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
        <bindingRedirect oldVersion="1.0.0.0-2.0.0.0" newVersion="3.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>

HTML5 Validation using Yepnope

November 23, 2011

In this month’s MSDN Magazine there is an interesting article on browser and feature detection.  What really caught my eye was the piece on Modernizr, the JavaScript library that implements browser feature detection.

Modernizr has built-in detection for most HTML5 and CSS3 features and is very easy to use in your code. It is widely adopted and constantly enhanced. Both Modernizr and jQuery ship with the ASP.NET MVC tools.

As HTML5 and CSS3 become more and more prevalent, feature detection is increasingly relevant.

A growing number of ready-made “fallbacks” for many HTML5 features, known as shims and polyfills, can ease that burden. These come in the form of CSS and JavaScript libraries, or sometimes even Flash or Silverlight controls, that you can use in your project to add missing HTML5 features to browsers that don’t otherwise support them. The difference between shims and polyfills is that shims only mimic a feature and each has its own proprietary API, while polyfills emulate both the HTML5 feature itself and its exact API. So, generally speaking, using a polyfill saves you the hassle of having to adopt a proprietary API.  The HTML5 Cross Browser Polyfills collection on GitHub contains a growing list of available shims and polyfills.

As .Net developers we are usually insulated from browser incompatibility issues; however, there may be situations where you are not using the .Net framework.  There is a good example of using yepnope (a conditional resource loader) for HTML5 form validation at CSSKarma.  I tweaked and re-implemented the example using a .Net MVC application, which you can see below.  A couple of “gotchas”:

· Modernizr is part of the .Net MVC tools that come from Microsoft; however, yepnope is not in the release of the MVC tools that I have on my machine.  You will need to download version 2.0 from modernizr.com to get yepnope support.

· The jQuery syntax for working with yepnope takes some getting used to.  Make sure to load jQuery first.

· Visual Studio 2010 does not seem to know about HTML5 and will raise validation warnings when you use its attributes. See figure 2.


Figure 1 – MyValidation.js – based on the example at CSSKarma


Figure 2 – HTML5 Source


Figure 3 – Output


Best practices for adding scalability

November 11, 2011

My thesis is that you can’t have a good SaaS application that doesn’t scale.  By definition the need for scalability is driven by customer demand, but there is demand and there is DEMAND. A handful of lucky organizations (Google, Twitter, Facebook) face industrial-strength volume every minute of every day. Organizations with this type of DEMAND can afford entire divisions dedicated to managing scalability. Most people are dealing with optimizing their resources for linear growth, or the happy situation where their application (Instagram) catches fire, in some cases overnight. A scalable architecture makes it possible to expand to cloud services such as EC2 and Azure, or even to locally hosted capacity. Absent a scalable architecture, an organization is faced with curating a collection of tightly coupled servers and overseeing a maintenance nightmare.

Scalability is the ability to handle additional load by adding more computational resources.  Performance is not scalability; however, improving system performance mitigates to some degree the need for scalability.  Performance is the number of operations per unit of time that a system can handle (e.g., words / second, pages served / day, etc.).  There are two types of scalability – vertical and horizontal.

Vertical scalability is achieved by adding more power (more RAM, a faster CPU) to a single machine, and typically results in incremental improvements.  Horizontal scalability means accommodating more load by distributing processing across multiple computers.  Where vertical scaling is relatively trivial to implement, horizontal scaling is much more complex; in exchange, it offers theoretically unlimited capacity.  Google is the classic example of horizontal scalability using thousands of low-cost commodity servers.

If you have the luxury of working from a blank sheet of paper, or have the flexibility to adopt a major new technology stack, some of the better solutions for implementing scalability include ActiveMQ and Hadoop. Microsoft’s AppFabric Service Bus promises capability in this area for Azure-hosted applications. Many times scalability was considered when an application was first created but has proven inadequate for current demand.  The following are suggestions for improving an existing application’s scalability.

Microsoft’s Five Commandments of Designing for Scalability

  • Do Not Wait – A process should never wait longer than necessary.
  • Do Not Fight for Resources – Acquire resources as late as possible and release them as soon as possible (see the sketch after this list).
  • Design for Commutability – Two or more operations are commutative if they can be applied in any order and still obtain the same result.
  • Design for Interchangeability – Manage resources such that they are interchangeable (e.g., database connections).  Keep server-side components as stateless as possible.
  • Partition Resources and Activities – Minimize relationships between resources and between activities.
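
As a quick sketch of the second commandment in C# (the connection string and query are placeholders), a using block acquires a pooled database connection as late as possible and disposes it immediately after use:

using System.Data.SqlClient;

static class ResourceDemo
{
    static int CountOrders(string connectionString)
    {
        // Acquire the connection only at the moment it is needed...
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM orders", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        } // ...and release it here, returning it to the pool for other requests.
    }
}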

Microsoft’s Best Practices for Scalability

  • Use Clustering Technologies such as load balancers, message brokers, and other solutions that implement a decoupled architecture.
  • Consider Logical vs. Physical Tiers, such as the model view controller (MVC) architecture.
  • Isolate Transactional Methods so that components that implement transactions are distinct from those that do not.
  • Eliminate Business Layer State so that, wherever possible, server-side objects are stateless.

Shahzad Bhatti’s Ten Commandments for Scalable Architecture

  1. Divide and conquer – Design a loosely coupled, shared-nothing architecture.
  2. Use message-oriented middleware (an ESB) to communicate with the services.
  3. Resource management – Manage HTTP sessions (and eliminate them for static content), and close resources such as database connections as soon as you are done with them.
  4. Replicate data – For write-intensive systems use a master-master scheme to replicate the database; for read-intensive systems use a master-slave configuration.
  5. Partition data (sharding) – Use multiple databases to partition the data.
  6. Avoid single points of failure – Identify any single point of failure in hardware, software, network, or power supply.
  7. Bring processing closer to the data – Instead of transmitting large amounts of data over the network, bring the processing closer to the data.
  8. Design for service failures and crashes – Write your services to be idempotent so that retries can be done safely.
  9. Dynamic resources – Design service frameworks so that resources can be added or removed automatically and clients can discover them automatically.
  10. Smart caching – Cache expensive operations and content as much as possible (a sketch follows this list).
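
To make the caching commandment concrete, here is a minimal sketch using MemoryCache from System.Runtime.Caching (.NET 4); the cache key, the expiration window, and the BuildReport stand-in are all illustrative:

using System;
using System.Runtime.Caching;

static class ReportCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    public static string GetDailyReport()
    {
        // Serve from cache when possible; rebuild at most once per ten minutes.
        var report = Cache.Get("daily-report") as string;
        if (report == null)
        {
            report = BuildReport(); // stands in for the expensive operation
            Cache.Set("daily-report", report,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10) });
        }
        return report;
    }

    static string BuildReport()
    {
        return "pretend this took many seconds to compute";
    }
}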


Comparing IE9, Firefox 5, and Chrome 12

July 31, 2011

Since the release of IE9 in March of 2011, Microsoft has been claiming that Internet Explorer is the best browser for business. They recently commissioned Forrester Research to help justify and quantify this claim. The actual study, “The Total Economic Impact of Windows Internet Explorer 9,” assessed the ROI six large organizations experienced upgrading from IE8 to IE9. Firefox and Chrome are never mentioned in the Forrester study. I accept the argument that IE is more optimized for a homogeneous enterprise environment than Firefox or Chrome, but I don’t think it’s fair or even accurate to imply that IE is the best browser.

I regularly use all three mainstream browsers and have a pretty good sense of their strengths and weaknesses. I did a bunch of real-world testing, trying various sites, configuring the layout of the browsers, and looking at different features. In the end, with the exception of things that matter only to a relatively small audience, I came away feeling that all three are more than adequate for day-to-day business use.

Internet Explorer

IE9 has come a very long way. Its predecessors were buggy, full of security problems, and lacked standards compliance. IE9 is a fine, feature-rich browser. I mainly use IE to access internal sites based on SharePoint (to take advantage of the tight integration with the operating system) and to access email remotely via Outlook Web Access (using an ActiveX control). I don’t think IE is as good as either Chrome or Firefox (largely because of the lack of a rich extension / add-on marketplace) but the gap is rapidly closing.

Strengths: ActiveX controls; OS-level integration (i.e., SharePoint)
Weaknesses: Lack of rich extensions – possibly because it’s harder (big thing); lack of a spell checker (small thing)


Firefox 5

I am a long-time Firefox user; it has been my main browser for as long as I can remember. Initially I started using Firefox because IE was so problematic. Five years ago Firefox had unique and innovative features like tabs and extensions, and it was much more secure than IE. What really hooked me on Firefox was the Adblock Plus extension. I cannot remember the last time I saw an ad in Firefox, never mind clicked on one. Over the past year I’ve become more conscious of how long it takes Firefox to start up and load web pages, and of how much memory it consumes. Part of my speed and memory problems are no doubt related to the extensions I have running.

Strengths: Rich extensions marketplace
Weaknesses: Memory management


Chrome 12

Over the past year I’ve started to use Chrome more and Firefox less. What first grabs you about Chrome is the out-of-the-box speed. The application loads nearly instantly, pages render quickly, and in general the application feels zippy. Chrome has many of the nice features of Firefox but without the bloat. Most importantly for me, most (but not all) of the extensions I use regularly are now available for Chrome. The extensions I use are Adblock Plus, autocopy, Firebug, Instapaper, LastPass, StumbleUpon, UserAgentSwitcher, and Xmarks. One really nice under-the-hood feature unique to Chrome is that you don’t need to restart the browser after installing an extension. The other thing I’ve noticed is that Chrome seems to do a good job of managing memory and distributing applications into separate processes. In my unscientific test of IE, Firefox, and Chrome, IE used the least memory. On the other hand, IE doesn’t have the add-ons/extensions that Firefox and Chrome do, so the test is not really fair.

Besides the inertia of switching from one browser to another, what has held me back from hopping on the Chrome bandwagon is the overhead of learning its development tools. I’ve become very attached to Firebug, which is Firefox-specific, and I fairly regularly use the UserAgentSwitcher add-on, which is not available for Chrome. Interestingly, John Barton, the lead Firebug developer, has recently joined the Google Chrome team.

Strengths: Fast
Weaknesses: Memory management; lack of a user agent switcher add-on


For the moment I am “stuck” using all three browsers. Stuck really isn’t the right word for it. Until IE gets a richer extension model I probably won’t use it unless I have to. Unless something unexpected happens, I expect that Chrome will become my everyday browser.

CloudForce Boston – June 2011

July 4, 2011

I recently attended the CloudForce 2011 event in Boston.  Marc Benioff spoke for slightly more than an hour with minimal slideware and no notes.  Marc is an amazing speaker, and I suspect much of what we saw in Boston will get re-used at the Dreamforce keynote in August.  CloudForce was a great opportunity for me to piece together in my mind the Salesforce.com (SFDC) story.

Salesforce now has a very interesting collection of technologies – four families, if you will:

  • Tools for Sales – “Sales Cloud” – pipeline management, and Jigsaw – leads
  • Tools for Supporting Customers – “Remedyforce” – generic customer support, and “Service Cloud” – help desk in the cloud
  • Social – Chatter – Twitter for business, and Radian6 – social network monitoring
  • Platform as a Service (PaaS) – Force.com – applications for extending SFDC, Database.com – database in the cloud, and Heroku – Ruby on Rails in the cloud

The tools for sales and support evolved organically from SFDC’s roots as a CRM company.  The PaaS applications, Heroku aside, evolved from the platform Salesforce built to support the CRM business.  The social offerings are all new and were the focus of much of Marc’s talk.

Social Enterprise

My sense is that it’s the social networking pieces that really excite Benioff personally.  Benioff paints a compelling vision of Salesforce.com (the company) becoming the engine of the social enterprise.  The argument in favor of becoming a social enterprise is based on how dominant social networking (read: Facebook) has become for the consumer.  Indeed, he asserted that more people are using social networks than email.  He used Facebook as an opportunity to cite McKinsey research on the consumerization of IT.  The key point is that, for the first time, the technology consumers use in their personal lives is driving enterprise IT strategy.  Today’s consumers expect to be able to interact with brands on their iPhones and via social networks.  Companies ignore this dynamic at their peril.  Indeed, as seen in this video, he is positioning his company as the de facto expert in B-to-B social networks.  Chatter (enterprise Twitter) and Radian6 (social network monitoring) are Salesforce.com’s beachheads into the world of the social enterprise.

Platform as a Service

Using SFDC as a PaaS provider makes a ton of sense if your organization already has (or will have) a significant investment in their CRM products.  It’s logical that Salesforce would make their PaaS offerings available for anyone to use; however, this is a crowded space with strong, established offerings from Amazon, Microsoft, and Google.  I am not sure I fully appreciate the Heroku tie-in.

Disclaimer: I am basically a Microsoft .Net guy but am a very big fan of Ruby on Rails.  I’ve written applications and have done more than the prototypical hello-world application in a bunch of languages including PHP, Ruby on Rails, and Java.  I’ve never been wild about the tools (Apex Code and Visualforce) that Salesforce gives you as a developer to natively integrate with their platform.  Apex Code essentially allows developers to write stored procedures for the database, while Visualforce is a presentation-layer language for their system.  Both of these languages have roots in Java and are based on the model view controller (MVC) design pattern.  My basic problem with the SFDC languages is that they are SFDC-specific, and there is a learning curve with any new language.  (There is amazing developer support for just about any language you can imagine to interface with SFDC, but only applications written in Visualforce or Apex can run on the SFDC servers.)  If you talk to the CRM folks they will tell you that this specialization was needed to achieve the kind of performance and deep integration they wanted.  At the time these tools were announced I wondered why they couldn’t do what they wanted with Java.  Now SFDC has bought Heroku, which allows you to run Ruby on Rails in the cloud.  Ruby on Rails is another MVC-based framework and is the “hot” open source stack.  How amazing would it be if you could write an application in RoR that would run on the SFDC servers?

Some amusing quips from Cloudforce

  • Beware of the false cloud – a cloud is not real if it is not public
  • SharePoint is like your grandfather’s attic – what I put in I can never find

Amazon Web Services – Ready for Primetime

January 11, 2011

Netflix is in the process of moving to the Amazon cloud. There are a couple of really good blog posts about the move by John Ciancutti, Netflix’s VP of Personalization Technology. I find it very interesting that an operation as big as Netflix would be investing as much as it is into AWS. This is how I read it.

This is a huge leap of faith. While I am not an investment analyst and haven’t studied Netflix’s proxy statements, I am a customer. From what I can see, Netflix is slowly walking away from the DVD-in-the-mail business and moving to the video-on-demand model. The old way of doing business would have said that “our web presence is too core to our business to outsource to a third party.”

This is a tremendous vote of confidence in Amazon. I am sure that Netflix did not go into this partnership with Amazon without thorough due diligence. In this business there is no one-size-fits-all model and “your mileage will vary”; however, I interpret Netflix’s decision as a statement that the Amazon cloud is ready for primetime. I’ve heard the same message from other people I know as well.

There was a window of opportunity. According to the Netflix post, they “needed to re-architect” anyhow, so the timing apparently worked out to optimize their software to integrate into AWS.

The cost equation is starting to make sense. Every time I’ve looked at AWS I’ve come away thinking that it’s really expensive. A friend of mine observed the following: “they (Netflix) are large enough that they could probably save millions by building it themselves.  However, there would be opportunity cost of putting some of their best people onto scalability instead of feature development.”

AWS is no silver bullet. There are stories out there like Instagram, which exploded overnight to 1M users. The only way a product like that could possibly have scaled to accommodate that much traffic is with a solution like AWS/EC2. That said, committing an existing business to a cloud-based solution is very much a strategic decision and requires management commitment.

I think of AWS as the established incumbent in the space, but they aren’t by any means the only serious player.  I’ve heard good things about Rackspace’s CloudServer offering, and Windows Azure is an increasingly interesting technology that will be very meaningful – particularly to folks who write code on the Microsoft stack.

