Economic Impact of Cloud Computing

March 23, 2014

In January I attended a webinar by Forrester analysts James Staten and Sean Owens on the economics of cloud computing.  The slides and recording of the presentation are linked here and here.  Although the research was commissioned by Microsoft and thus had an Azure flavor, many of the insights are equally applicable to other cloud platforms such as Amazon or Rackspace.  What was unique about this presentation was that it was targeted at economic decision makers rather than technology leaders.  The presentation is well done and insightful.

The first part of the presentation has a good basic introduction to the economics of cloud computing.  With a public cloud you only pay for what you use, cloud platforms allow development teams to self-provision, and perhaps most importantly, a public cloud allows an organization to instantly add capacity should demand materialize.

Although I am a huge proponent of public clouds there are pitfalls.  Some of the economic reasons NOT to use a public cloud include:

  • Ease of running up a giant bill.  There are horror stories of teams running load tests that accidentally leave a test server enabled and incur what Forrester referred to as the “shock bill.”  An active yet idle VM costs the same as one that is being used.
  • Not using existing investment.  The ease of using a public cloud makes it easy for local teams to “roll their own” in lieu of using existing infrastructure already provided for them by their IT group.
  • Loss of control.  Public clouds are just that – public, and may not have the same level of security found in a hosted solution.  While the cloud provider may ensure that the operating system is fully patched they cannot guard against weak administrative passwords.
  • OpEx vs. CapEx.  Public clouds are almost always considered an operating expense, whereas an on-premise solution is typically capitalized.

These issues should NOT be used as excuses to avoid public clouds.  Indeed there are “patterns” and best practices to ensure that the organization is making the best possible choices.

The second part of the presentation was insightful in that it called out some of the indirect benefits, or perhaps less obvious reasons, to use a public cloud.  For example,

  • Cloud computing is cool.  Developers want to work on cutting-edge technologies, and giving them access to Azure / EC2 helps with recruiting and retention.
  • Public clouds push local IT groups.  Some organizations struggle with non-responsive / bureaucratic IT groups.  The ability to “outsource” hosting (for public or staging servers) puts pressure on IT departments to provide better customer service.
  • Geographic distribution.  Although not discussed extensively in this presentation, services like Azure and EC2 allow organizations to host in a geographically distributed way.  This capability, particularly when combined with a caching strategy, is useful for improving performance for global customers and ensuring that an organization is compliant with local privacy regulations.

Forrester is a very careful organization and the economic claims in the presentation are no doubt true.  On the other hand some organizations may find that their “mileage will vary.”  Cloud computing is nearly always more cost effective than buying new physical servers.  However, a hybrid public/private hosting strategy, leveraging platforms such as VMware and Microsoft’s own Hyper-V virtualization system, could be more cost effective for established entities with an existing investment in on-premise hosting.  Private cloud platforms support the convenience of self-service and dynamic growth while incurring minimal incremental operational costs.  (There is a fixed, typically capitalized, cost to host the servers.)  This type of solution is very appropriate for development and QA groups that need non-production servers.

Bottom line: for a fast-growing start-up there is nothing better than having no hosting footprint yet being able to scale on demand.  For established entities the public cloud also makes sense, but it should be considered in the context of existing investments and used where it makes sense.


Scaled Agile Framework (SAFe)

December 27, 2013

Implementing agile methods at higher levels, where multiple programs and business interests often intersect, has always been a challenge.  Consultant Dean Leffingwell, formerly of Rally Software and Rational Software, created a project management framework called the Scaled Agile Framework (SAFe) for applying agile principles at the enterprise level.

[Screenshot: the Scaled Agile Framework “big picture” graphic from the SAFe home page]

At a high level SAFe is a set of best practices tailored for organizations that want to embrace agile principles at the portfolio level.  Conceptually SAFe creates a framework whereby there is an integrated view of, and coordination between, multiple different projects.  NB: The graphic on the SAFe home page (see screenshot above) is clickable and is itself a terrific agile reference.

One of the best things about agile methodologies is that they are lightweight and self-directed.  High-level frameworks run the risk of having more overhead than value.  On the other hand nearly every organization with more than one product needs an integrated view of how its projects fit together.  Indeed, it is not unusual to see senior managers disconnected from day-to-day operations struggle to see how the pieces fit together, or attempt to make invalid comparisons between teams, such as comparing story point velocities.

At the end of 2013, two of the market leaders in application lifecycle management (ALM) are Rally Software and Microsoft.  Both Rally and Microsoft’s Team Foundation Server (TFS) have wholeheartedly embraced the notion of portfolio management in the latest iterations of their respective products.

Rob Pinna of the Rally development team has a great analysis of SAFe here.  Similarly InCycle Software, a Microsoft Gold Certified ALM partner, recently did a webinar highlighting a customized TFS template they used to demonstrate TFS’s ability to support SAFe.


2013 Pacific Crest SaaS Survey

October 20, 2013

In September David Stock of Pacific Crest (an investment bank focusing on SaaS companies) published the results of their 2013 Private SaaS Company Survey.  The survey covers a variety of financial and operating metrics pertinent to the management and performance of SaaS companies, ranging from revenues, growth and cost structure, to distribution strategy, customer acquisition costs, renewal rates and churn.  Here are some top-line observations from me:

  • 155 companies surveyed with median: $5M revenue, 50 employees, 78 customers, $20K ACV, and a global reach
  • Excluding very small companies, median revenue growth is projected to be 36% in 2013
  • Excluding very small companies, the most effective distribution mode (as measured by growth rate) is mixed (40%), followed by inside sales (37%), field sales (27%), and internet sales (23%)
  • Excluding very small companies, $0.92 was spent for each dollar of new ACV from a new customer, whereas it was only $0.17 per dollar of new ACV from upselling existing customers (see the payback arithmetic after this list)
  • The median company gets 13% of new ACV from upsells, however, the top growers upsell more than slower ones
  • Companies which are focused mainly on enterprise sales have the highest levels of professional services revenue (21%) with a median gross margin of 29%
  • Median subscription gross margins are 76% for the group
  • Approximately 25% of companies make use of freemium in some way, although very little new revenue is derived from it.  Try Before You Buy is much more commonly used: two-thirds of the companies use it and many of those companies derive significant revenues from it
  • The average contract length is 1.5 years with quarterly billing terms
  • Annual gross dollar churn (without the benefit of upsells) is 9%
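One way to read these numbers together (a rough illustration combining the medians above, ignoring churn and upsell): at $0.92 of acquisition spend per $1.00 of new ACV and a 76% subscription gross margin, a new customer generates about $0.76 of gross margin per year, so it takes roughly 0.92 / 0.76 ≈ 1.2 years of subscription revenue to pay back its acquisition cost.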

The full survey can be accessed here.


Thoughts on SDET

September 8, 2013

I was recently approached by a colleague about the concept of the Software Development Engineer in Test.  These are developers who build software used for testing.  Essentially the argument is that we need to move away from the idea of having two separate QA functions – a manual QA team and an automation team.  The “industry” (Microsoft most prominently) is moving towards 100% automation, and QA engineers are now called “Software Development Engineers in Test” or SDETs.

I reached out to a former colleague in Bangalore about his experience managing a group in Microsoft QA.  (He has since left there and presently is a lead at another prominent organization.)  Here is what he told me:

MS has the concept of SDET, i.e. software development engineer in test.  What makes this unique is the blend of technical knowledge (language and coding skills) along with testing domain knowledge, which allows this role to contribute extensively to designing in-house automation tools and frameworks, carry out white box testing at the code level, and contribute actively to automation and performance testing.
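To make the white-box idea concrete, here is a minimal, hypothetical sketch of the kind of code-level test an SDET might write.  The PriceCalculator class is my own illustration (not anything from the conversation above), and the tests use NUnit:

    using NUnit.Framework;

    // Hypothetical component under test: a tiny pricing rule that applies a
    // 10% discount once the subtotal reaches $100.
    public class PriceCalculator
    {
        public decimal Total(int quantity, decimal unitPrice)
        {
            decimal subtotal = quantity * unitPrice;
            return subtotal >= 100m ? subtotal * 0.90m : subtotal;
        }
    }

    // The kind of automated, code-level test an SDET would own, as opposed
    // to a manual test script exercising the UI.
    [TestFixture]
    public class PriceCalculatorTests
    {
        [Test]
        public void NoDiscountBelowThreshold()
        {
            Assert.AreEqual(50m, new PriceCalculator().Total(1, 50m));
        }

        [Test]
        public void TenPercentDiscountAtThreshold()
        {
            Assert.AreEqual(90m, new PriceCalculator().Total(2, 50m));
        }
    }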

I then did some reading on my own about SDET and learned a bit from the web.

My very first job was testing printers for Digital.  A lot of what we did was to write code that exercised every single mode the printer supported.  For example, we wrote code in Pascal that generated PostScript by hand.  Some of our programs were added to a test automation tool called the “creaker.”  Others had to be run on their own.  This was 90% black box testing, and we did miss some things, but we were software engineers doing testing.  I get what you are saying that you want testers to be looking at the code and writing unit tests in addition to black box testing.

I come away from all of this thinking that SDET is really hard to pull off without serious organizational commitment.  I could see it working if there were a larger test organization where this concept was institutionalized or if we had the testers reporting to Development.  On the other hand testing is not as effective as it needs to be.  There is never enough automation (or enough people doing it), and, more problematic, product knowledge is typically lacking.


Bundling and Minification

September 1, 2013

Found a great post from Rick Anderson about Bundling and Minification in ASP.NET 4.5.  From the blog post:

Bundling is a new feature in ASP.NET 4.5 that makes it easy to combine or bundle multiple files into a single file. You can create CSS, JavaScript and other bundles. Fewer files means fewer HTTP requests and that can improve first page load  performance.

The (basic) implementation is fall down easy.  Create a BundleConfig class like so…

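A minimal sketch, with illustrative bundle names and file paths:

    using System.Web.Optimization;

    public class BundleConfig
    {
        // Called once at application start-up to register script and style bundles.
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Combine (and, in release builds, minify) the jQuery scripts into one request.
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                "~/Scripts/jquery-{version}.js"));

            // Combine the site's CSS into a single stylesheet bundle.
            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));
        }
    }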

Then reference it from Global.asax.cs like this….

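Again a minimal sketch, assuming the standard application class generated by the project template:

    using System.Web;
    using System.Web.Optimization;

    public class MvcApplication : HttpApplication
    {
        protected void Application_Start()
        {
            // Wire up the bundles defined in BundleConfig when the application starts.
            BundleConfig.RegisterBundles(BundleTable.Bundles);
        }
    }

In a Razor view the bundles are then emitted with Scripts.Render("~/bundles/jquery") and Styles.Render("~/Content/css"), which expand to a single combined, minified request when compilation debug is turned off.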


Microsoft’s Azure Stores

August 25, 2013

Microsoft Azure has historically lagged far behind Amazon’s EC2 in the market and in the hearts and minds of most developers.  Azure started out life as a Platform as a Service (PaaS) offering, which pretty much no one wanted.  Indeed most developers wanted Infrastructure as a Service (IaaS) – which EC2 has had since day one.  The difference is that with IaaS you deploy and manage your own machines and applications in the cloud, whereas with PaaS you are constrained to compiled / packaged offerings.  Further, Amazon has been innovating at such a rapid pace that at pretty much every turn Azure has looked like an inferior offering by comparison.

In mid-2011 Microsoft moved their best development manager, Scott Guthrie, onto Azure.  Also working on the Azure project since 2010 is Mark Russinovich, arguably Microsoft’s best engineer.  At this point Microsoft truly has their “A” team on Azure, and they are actively using it themselves with Outlook.com (the replacement for Hotmail) and SkyDrive (deeply integrated into Windows 8).  Amazon EC2 is still the gold standard in cloud computing but Azure is increasingly competitive.  The Azure Store is a step in the direction of building parity.  The Store was announced in the fall of 2012 at the Build Conference and has come a long way in a short period of time.  By way of reference, Amazon has something similar called the AWS Marketplace.

There are actually two different entities: the Azure Store is meant for developers and the Azure Marketplace is meant for analysts and information workers.  My sense is that the Marketplace has been around longer than the Store, as it has a much richer set of offerings.  Some of the offerings overlap between the Store and the Marketplace.  For example, the Worldwide Historical Weather Data can be accessed from both places.

Similarities

  • Both have data and applications.
  • Both operate in the Azure cloud.

Differences

  • Windows Azure Store: Integration point is always via API
  • Marketplace: Applications are accessed via a web page or another packaged application such as a mobile app; data can be accessed via Excel, (sometimes) an Azure dataset viewer, or integrated into your application via web services

What is confusing to me is why there are so many more data applications in the Marketplace than there are in the Store.  For example, none of the extensive Stats Inc data is in the Store.  It may be that the Store is just newer and has yet to be fully populated.  See this Microsoft blog entry for further details.

I went and kicked the tires of the Azure Store and came away very impressed with what I saw.  I saw approximately 30 different applications (all in English).  There are two different types of apps in the store – App Services and Data.  Although I did not write a test application, I am fairly confident that both types are accessed via web services.  App Services provide functionality whereas Data offerings provide information.  In both cases these apps can be thought of as buy vs. build.

  • App Services: You can think of a service as a feature you would want integrated into your application.  For example, one of the App Services (Staq Analytics) provides real-time analytics for service-based games.  In this case a game developer would code Staq Analytics into their games, which in turn would provide insight on customer usage.  Another application, MongoLab, provides a NoSQL database.  The beauty of integrating an app from the Azure marketplace is that you as the customer never need to worry about scalability.  Microsoft takes care of that for you.
  • Data: Data applications provide on-demand information.  For example, Dun and Bradstreet’s offering provides credit report information, Bing provides web search results, and StrikeIron validates phone numbers.  As with app services, Azure takes care of the scalability under load.  Additionally, with a marketplace offering the data is theoretically as fresh as possible.  (A sketch of consuming such a data feed follows this list.)
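To give a flavor of the web-services style of access, here is a minimal sketch of pulling a few rows from a Marketplace data offering in C#.  The publisher, dataset, and feed URL below are hypothetical placeholders; Marketplace datasets are exposed as OData feeds and authenticated with your account key:

    using System;
    using System.Net;

    class MarketplaceDataDemo
    {
        static void Main()
        {
            // Hypothetical OData feed URL; the real one is listed on the
            // dataset's details page in the Marketplace.
            string feedUrl =
                "https://api.datamarket.azure.com/SomePublisher/SomeDataset/v1/Records?$top=10";

            using (var client = new WebClient())
            {
                // The account key is supplied as the password of an HTTP Basic
                // credential; the user name is not significant.
                client.Credentials = new NetworkCredential("AccountKey", "<your-account-key>");

                // The response comes back as an OData (Atom/XML) document.
                string response = client.DownloadString(feedUrl);
                Console.WriteLine(response);
            }
        }
    }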

Further detail on the Store can be found here.

All in all the interface is very clean and straightforward to use.  There is a store and a portal.  Everything in the store appears to be in English, though based on the URL it looks like it might be set up for localization.  The portal is localized into 11 languages.  The apps do not appear to be localized – though the Azure framework is.  As a .Net developer I feel very comfortable using this environment and am impressed with how rich the interface has become – increasingly competitive with EC2 on a usability basis.

Applications are built using the Windows Azure Store SDK.  There is a generic mailto address for developers to get in contact with Microsoft.  There is also an Accelerator Program which will give applications further visibility in the Azure Store.

It is probably worth highlighting that Microsoft actually has a third “store” of sorts called VM Depot (presently in preview mode), which focuses more on the IaaS approach and on bridging “on premise” and “off premise” clouds via Hyper-V and Azure portability.

Finally, identity technologies are also gaining a lot of focus, striving to unify the experience for hybrid deployments of on-premise or hosted IaaS when combined with Azure PaaS.  The ALM model is also starting to be unified so that both Azure and Windows Hyper-V workloads will be delivered by development teams as defined packages – databases as DACs, applications as CABs / MSDeploy packages, sites as WebDeploy / Git, etc. – with many Azure features, such as the Service Bus, being ported back to Windows Server.  Additionally, monitoring services are starting to unify around this model to present a transparent, unified, distributed service.


Doing things that don’t scale

August 21, 2013

I recently came across a great essay from Paul Graham, one of the founders of Y Combinator, entitled Do Things that Don’t Scale.  For those of us who operate mostly within the safe confines of an established business, Mr. Graham’s guidance is very unconventional.  On the other hand I love the thinking, and indeed parts of it are totally applicable to established entities.  Some of the ideas I like most:

  • Startups take off because the founders make them take off.
  • The question to ask about an early stage startup is not “is this company taking over the world?” but “how big could this company get if the founders did the right things?”
  • You should take extraordinary measures not just to acquire users, but also to make them happy. I have never once seen a startup lured down a blind alley by trying too hard to make their initial users happy.
  • It’s not the product that should be insanely great, but the experience of being your user. The product is just one component of that.

There is a great quote from Hannibal to the effect “We will find a way or make one.” Whether you are incubating a new business or improving an established product or service it is up to the leaders to find the way to make “it” happen.

