Cloud computing is as much a mindset as it is a technology strategy

August 21, 2014

An analogy I’ve used in the past is that were Marc Benioff (CEO of Salesforce.com, arguably the most forward-leaning organization in the world of enterprise cloud computing) to acquire a business built around a collection of “stove pipe” on-premise solutions, one of the first things he would do is dictate that everything be moved to the cloud.  Marc Benioff was one of the first CEOs to truly “get it.”  Everything his organization does is viewed through the lens of the cloud.

Increasingly, it is clear that there is a viable and economical cloud computing solution for nearly every situation.


| Organization Size | Security Req. | Existing Investment | IT Group |
| --- | --- | --- | --- |
| Entrepreneurs / Sole Proprietorships | Basic | $ | None |
| Startups | Varies | $ | Developer |
| Small Businesses (less than $1M in annual sales) | Varies | $ | Shared |
| Small-to-Medium Businesses ($1M-$10M in sales) | Varies | $$ | Maybe |
| Mid-sized Companies ($10M-$100M in sales) | Medium | $$$ | Likely |
| Medium-to-Large Companies ($100M-$1B in sales) | High | $$$$ | Yes |
| Enterprises ($1B+ in sales) | High | $$$$$ | Definitely |

* Columns are meant to represent typical situations

To some there are as many reasons not to embrace the cloud as there are to adopt it.  It is a matter of perspective.  Those positively disposed to a SaaS strategy will find a way.  Those who are not will find reasons (excuses) why it does not make sense, is too costly, or would be too much of a distraction.  Where there is a will there is a way; however, the size of the organization (which is often a proxy for existing investment) reasonably weighs on the difficulty of making the leap to the cloud.

Existing investment – For smaller entities or individuals there may be cheaper alternatives than hosting in the cloud; however, the effort associated with maintaining hardware and the ease of scaling are reasons why a cloud-based solution makes sense over the long term.  For larger entities and enterprises with an existing investment in other technologies, such as a datacenter that cannot be easily abandoned, a hybrid hosting strategy often makes sense.

Security – A common reason cited not to use a cloud-based solution is that customers demand a higher level of security than can be achieved in a public cloud.  Organizations like Salesforce.com and Google (Apps) have demonstrated for all the world to see that even the most sensitive information can be held securely in a public cloud.  Arguably, when properly configured, information in a public cloud is held as securely as, if not more securely than, in an on-premise solution.  Other than high-security government data there is little information which cannot be “trusted to the cloud.”

Cost – The cost profile of hosting an application in the cloud is indeed different than if hosted locally.  There are fewer upfront costs and expenses are tied to usage.  For larger entities there is a material difference in how the costs are accounted.  Cloud costs are operating expenses that must be recognized in the current reporting period, whereas much of the cost of an on-premise solution can be capitalized and depreciated over a number of years.
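To make the OpEx/CapEx distinction concrete, here is a minimal sketch.  All dollar figures and the straight-line depreciation schedule are illustrative assumptions, not actual provider pricing or accounting guidance.

```python
# Illustrative only: the hourly rate, purchase price, and five-year
# straight-line depreciation below are invented assumptions.

def cloud_annual_opex(hourly_rate: float, hours_per_year: int = 8760) -> float:
    """Cloud hosting: a pure operating expense, recognized as incurred."""
    return hourly_rate * hours_per_year

def on_prem_annual_depreciation(purchase_price: float, years: int = 5) -> float:
    """On-premise hardware: capitalized, then depreciated over several years."""
    return purchase_price / years

# A hypothetical always-on VM at $0.10/hour vs. a $10,000 server
# depreciated straight-line over five years.
opex = cloud_annual_opex(0.10)                    # hits the P&L as OpEx
capex_hit = on_prem_annual_depreciation(10_000)   # annual depreciation expense

print(f"Cloud OpEx per year:          ${opex:,.2f}")
print(f"On-prem depreciation per year: ${capex_hit:,.2f}")
```

The point of the sketch is only the shape of the spend: the cloud number scales with usage and hits the income statement immediately, while the on-premise number is a fixed purchase spread over the depreciation schedule.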

Complexity – Hosting in a public cloud poses new challenges to individuals and IT organizations which may have become accustomed to owning their own hardware.  Deploying to the cloud, ensuring proper security, scaling, and hundreds of other tasks are new skills that IT organizations have to acquire.  Those that do will thrive.  Those that do not will be left behind.


FUD – Fear, Uncertainty, and Doubt is an expression that has been used in the computer industry since the mid-1970s to refer to a technique used by those who seek to stymie the adoption of new technology due to a lack of understanding.  One of the first applications of FUD was by IBM salespeople to undermine buyers’ confidence in the competition.  For those with a stake in the status quo, moving to the cloud represents at best more work and at worst increased cost, downtime, and frustration as they find their way through new technology.

In reality Salesforce.com would likely never acquire a business so wedded to what it would view as an antiquated technology strategy.  When viewed in this way, the degree to which an organization has embraced the cloud can be viewed as a competitive advantage or indeed a disadvantage.


Enterprise Adoption of Cloud Technology

July 29, 2014

Forrester recently published a research note on enterprise adoption of cloud technology.  The full report can be downloaded here from Akamai.com (after registration).  As the report was commissioned by Akamai, which is by no means a neutral third party, the results need to be considered with caution.  That said, there are some interesting conclusions.

  • Public cloud use is increasing across a number of business-critical use cases.

This is not a surprise.  Public clouds have become mainstream.  Amazon’s case study page is a who’s who of well-known traditional brand names including Hess, Suncorp, Dole, and Pfizer as well as newer technology-oriented companies such as Netflix, Shazam, Airbnb, and Expedia.

  • Cloud success comes from mastering “The Uneven Handshake.”

The gist of this point is that organizations have specific requirements (e.g., security, access to behind-the-firewall data, etc.) which may be incompletely fulfilled by a particular cloud offering.  In order to use a cloud solution it may be necessary to stitch together multiple providers’ offerings with custom “glue” code.

  • It’s a hybrid world

Most organizations that have been around for a while have an investment in on-premise systems.  In addition to providing valuable services that work (think: don’t fix what isn’t broken), they are known commodities and are typically capitalized pieces of equipment/software.  In a perfect world it would often be cleaner to create a homogeneous configuration all on a cloud platform.  Unfortunately we do not live in a perfect world, and many times cloud systems have to be made to co-exist with legacy systems for technical, cost, or other reasons.

One particularly interesting finding is that most enterprises are quite satisfied with their investment in the cloud.  This conclusion is illustrated in the following figure.

How well did your chosen service actually meet key metrics?

Enterprise Considerations

As organizations begin the journey to the cloud, or expand their operations in it, there are a number of important considerations.  Each of these topics stands on its own, and literally thousands of pages of documentation exist on each.  Here are some brief overview thoughts.

  • Platform as a Service (PaaS) or Infrastructure as a Service (IaaS)

In a PaaS configuration the provider manages the infrastructure, scalability, and everything other than the application software.  In an IaaS configuration the enterprise that licenses the software has total control of the platform.  There are pros and cons to both.  PaaS can be very appropriate for small organizations that wish to off-load as much of the hosting burden as possible, but PaaS platforms offer less control and less flexibility.  IaaS provides organizations as much control as they would have in a self-hosted model.  The trade-off with IaaS is that the organization is responsible for provisioning and maintaining all aspects of the infrastructure.  Enterprises new to the cloud may find that their IT group is most comfortable with IaaS as it is much more familiar territory.  As the IT group is the one that answers the panicked call at 2:00 AM, its conservative nature is understandable.

  • Picking the right provider

Google AppEngine, Salesforce.com, Heroku, and Amazon Elastic Beanstalk are some of the most well-known PaaS platforms.  Amazon’s EC2 platform and Microsoft Azure Virtual Machines are the two dominant platforms in the IaaS space.  (Azure also has a rich PaaS offering called Web Sites.)  Rackspace has very strong offerings as well – particularly in the IaaS space.

  • Platform lock-in

With an IaaS model, careful consideration should be given to the selection of technology components.  To the point made in the Forrester report, interfaces between existing components need to be considered and configured to work together.  Further consideration should be given to whether platform-specific technologies should be used.  For example, Amazon offers a proprietary queuing solution (SQS – Simple Queue Service), while RabbitMQ is a well-respected open source queuing platform.  The choice of SQS would lock an organization into Amazon, whereas the choice of RabbitMQ allows more flexibility to shift to another platform.  Again, these are trade-offs to be considered.

  • Security

With enough time and effort, public cloud technology can theoretically be made as secure as an on-premise solution.  This topic is considered by the Forrester report, which notes, “The most common breaches that have occurred in the cloud have not been the fault of the cloud vendors but errors made by the customer.”  Should an organization decide to hold sensitive business-critical information in the cloud, a best practice is to retain a subject matter expert in cloud security and conduct regular third-party penetration testing.

  • Global Footprint and Responsiveness

One of the advantages of working with a public cloud provider is that an organization can cost-effectively host its applications around the world.  For example, Amazon offers three hosting options in the Asia Pacific region alone and nine regions world-wide.  Hosting in another geography is on the surface attractive for improving response times for customers as well as complying with country-specific privacy regulations.  For most organizations, hosting in a shared public cloud is much cheaper than self-hosting in a remote geography.  Organizations should be aware that hosting in a given region may or may not improve response times depending on how their customers access the service.  Your mileage may vary depending on customer network routing.  Performance testing using a service like Compuware can help identify how your customers access your content.  Similarly, care needs to be taken to ensure compliance with privacy laws.  For example, it is a well-known requirement that PII data from EU citizens should not leave Europe without the user’s consent.  A public cloud can be used to comply with this directive; however, should administrators from the US have the ability to extract data from those machines, the organization may not be meeting the requirements of the law.
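As a sketch of how region placement might be reasoned about from measured data, the snippet below picks the region with the lowest median observed latency.  The region names echo Amazon’s naming style, but the latency samples are invented for illustration.

```python
# Hypothetical latency samples (milliseconds) as a synthetic-monitoring
# service might report them; all numbers are invented.

def best_region(latency_samples: dict[str, list[float]]) -> str:
    """Pick the region with the lowest median observed latency."""
    def median(xs: list[float]) -> float:
        s = sorted(xs)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    # Median rather than mean, so one outlier sample does not dominate.
    return min(latency_samples, key=lambda region: median(latency_samples[region]))

samples = {
    "us-east-1":      [182.0, 175.0, 190.0],
    "eu-west-1":      [44.0, 51.0, 47.0],
    "ap-southeast-1": [240.0, 233.0, 251.0],
}
print(best_region(samples))  # eu-west-1
```

In practice the measurement side is the hard part (samples must come from where the customers actually are, over their real network routes); the selection logic itself stays this simple.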

  • Uptime and monitoring

Finally, enterprises need to be concerned with uptime.  It is a law of nature that all systems go down.  Even the biggest, most well-maintained systems have unplanned outages.  Nearly every cloud system has a distributed architecture such that rarely does the entire network go down at the same time.  Organizations should carefully consider (and test) how they monitor their cloud-hosted systems and fail over should an outage occur, just as they do with on-premise solutions.  Should an organization embrace a hybrid hosting strategy, the cloud could fail over to the self-hosted platform and vice versa.
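A minimal sketch of health-check driven failover might look like the following.  The endpoint URLs are hypothetical, and the check function is injected so the logic can be exercised without a network; in practice it might wrap an HTTP call to a health endpoint.

```python
# Sketch: prefer the primary (cloud) endpoint while its health checks pass,
# otherwise fail over to the self-hosted fallback.
from typing import Callable

def pick_endpoint(primary: str, fallback: str,
                  check: Callable[[str], bool],
                  attempts: int = 3) -> str:
    """Return the primary if any of `attempts` health checks pass,
    otherwise fail over to the fallback."""
    for _ in range(attempts):
        if check(primary):
            return primary
    return fallback

# Simulated outage: the cloud endpoint never answers its health check.
always_down = lambda url: False
print(pick_endpoint("https://cloud.example.com",
                    "https://datacenter.example.com",
                    always_down))
```

The retry loop is deliberate: failing over on a single missed check causes flapping, which is why the paragraph above stresses testing the failover path, not just having one.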


Agile Pre-mortem Retrospectives

June 6, 2014

Failure is Your Friend is the title of the June 4, 2014 Freakonomics podcast.  The podcast interviews cognitive psychologist Gary Klein.  Klein talks about an interesting technique called the pre-mortem.  “With a pre-mortem you try to think about everything that might go wrong before it goes wrong.”  As I was listening to Klein talk about how this might work in the physical world and medical procedures, it occurred to me that this might be a nice complement to an agile software development project.

Most scrum teams do some type of post-mortem after each sprint.  Most of the literature today calls these activities retrospectives, which has a more positive connotation.  (Taken literally, post mortem means “occurring after death” in Latin.)  After training exercises the Army conducts after-action reviews, affectionately called “AARs.”  For informal AARs (formal AARs have a prescribed format that is expected to be followed) I always found three questions elicited the most participation – what went well, what did not go well, and what could have been done better.  This same format is often effective in sprint retrospectives.

A pre-mortem retrospective would follow a very different format.  It asks the participants to fast-forward in time past the release and assume that the project was a failure.  Klein’s suggestion is to give each participant two minutes to privately compile a list of reasons why the project failed.  He then surveys the group and compiles a consolidated master list of those reasons.  Finally, he asks everyone in the room to think of one thing they could do to help the project.  Ideally the team emerges more attuned to what could go wrong and more willing to engage in risk management.
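The consolidation step Klein describes could be sketched mechanically as follows; the participants’ responses are invented for illustration, and in a real session the facilitator would of course do this on a whiteboard rather than in code.

```python
# Sketch of the pre-mortem consolidation step: merge each participant's
# private list of failure causes into one master list, most-cited first.
from collections import Counter

def consolidate(private_lists: list[list[str]]) -> list[tuple[str, int]]:
    """Master list of failure causes, ranked by how often they were cited."""
    counts = Counter(cause for lst in private_lists for cause in lst)
    return counts.most_common()

# Three participants' (invented) private lists.
responses = [
    ["scope creep", "untested migration"],
    ["scope creep", "key engineer left"],
    ["untested migration", "scope creep"],
]

for cause, n in consolidate(responses):
    print(f"{n}x {cause}")
```

The ranking is the useful output: causes cited independently by several people are the risks most worth assigning a mitigation to.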

In concept the idea makes a ton of sense.  I can see how it would force the team to be honest with themselves about risks, temper overconfidence, and ultimately be more proactive.  On the other hand, a pre-mortem is one more meeting and one more activity that does not directly contribute to the project.  I question whether there is enough value to do a pre-mortem for every sprint; however, for major new initiatives it could be a useful activity.  I quickly found two references on this topic.

http://www.slideshare.net/mgaewsj/pre-mortem-retrospectives

http://inevitablyagile.wordpress.com/2011/03/02/pre-mortem-exercise/


Using the right database tool

April 27, 2014

Robert Haas, a major contributor and committer on the PostgreSQL project, recently wrote a provocative post entitled “Why the Clock is Ticking for MongoDB.”  He was actually responding to a post by the CEO of MongoDB, “Why the clock’s ticking for relational databases.”  I am no database expert; however, it occurs to me that relational databases are not going anywhere AND NoSQL databases absolutely have a place in the modern world.  (I do not believe Haas was implying this was not the case.)  It is a matter of using the right tool to solve the business problem.

As Haas indicates, RDBMS solutions are great for many problems, such as query and analysis, where the ACID properties (Atomic, Consistent, Isolated, and Durable) are important considerations.  When the size of the data, the need for global scale, and the transaction volume grow (think Twitter, Gmail, Flickr), NoSQL (read: not-only-SQL) solutions make a ton of sense.
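To illustrate the “A” in ACID concretely, here is a small example using Python’s standard-library SQLite driver: an overdraft aborts the transaction, and both balances roll back together.  The accounts table and the amounts are invented for the example.

```python
# Atomicity demo: a failed transfer rolls back completely, so the database
# never shows money withdrawn from one account but not deposited in the other.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # the with-block is one atomic transaction
        conn.execute(
            "UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'")
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
        if balance < 0:
            raise ValueError("insufficient funds")  # triggers rollback
        conn.execute(
            "UPDATE accounts SET balance = balance + 150 WHERE name = 'bob'")
except ValueError:
    pass

# Both balances are unchanged: the partial update was rolled back.
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

This is exactly the guarantee that is relaxed or re-engineered in many NoSQL systems in exchange for horizontal scale, which is why the choice depends on the business problem.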

Kristof Kovacs has the most complete comparison of the various NoSQL solutions.  Mongo seems to be the most popular document database, Cassandra for row/column data, and Couchbase for caching.  Quoting Kovacs – “That being said, relational databases will always be the best for the stuff that has relations.”  To that end there is no shortage of RDBMS solutions from the world’s largest software vendors (Oracle – 12c, Microsoft – SQL Server, IBM – DB2) as well as many open source solutions such as SQLite, MySQL, and PostgreSQL.

In the spirit of being complete, Hadoop is not a database per se – though HBase is a database built on top of Hadoop.  Hadoop is a technology meant for crunching large amounts of data in a distributed manner, typically using batch jobs and the map-reduce design pattern.  It can be used with many NoSQL databases such as Cassandra.
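The map-reduce pattern itself fits in a few lines.  Hadoop would distribute the map, shuffle, and reduce phases across a cluster; this toy word count runs the same three phases locally, just to show the shape of the computation.

```python
# Toy map-reduce word count: map emits (word, 1) pairs, the shuffle groups
# pairs by key, and reduce sums each group.
from collections import defaultdict

def map_phase(line: str) -> list[tuple[str, int]]:
    """Map: emit a (word, 1) pair for every word in the input line."""
    return [(word, 1) for word in line.lower().split()]

def reduce_phase(pairs: list[tuple[str, int]]) -> dict[str, int]:
    """Shuffle + reduce: group pairs by word, then sum the counts."""
    grouped: dict[str, int] = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

lines = ["big data big jobs", "big clusters"]
pairs = [pair for line in lines for pair in map_phase(line)]
print(reduce_phase(pairs))  # {'big': 3, 'data': 1, 'jobs': 1, 'clusters': 1}
```

The reason the pattern scales is visible even here: every call to map_phase is independent, so the map work can be spread over as many machines as there are input splits.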


Economic Impact of Cloud Computing

March 23, 2014

In January I attended a webinar by Forrester analysts James Staten and Sean Owens on the economics of cloud computing.  The slides and recording of the presentation are linked here and here.  Although the research was commissioned by Microsoft and thus has an Azure flavor, many of the insights are equally applicable to other cloud platforms such as Amazon or Rackspace.  What was unique about this presentation was that it was targeted at economic decision makers rather than technology leaders.  The presentation is well done and insightful.

The first part of the presentation has a good basic introduction to the economics of cloud computing.  With a public cloud you only pay for what you use, cloud platforms allow for development teams to self-provision, and perhaps most importantly, a public cloud allows an organization to instantly add capacity should demand materialize.

Although I am a huge proponent of public clouds there are pitfalls.  Some of the economic reasons NOT to use a public cloud include:

  • Ease of running up a giant bill.  There are horror stories of teams running load tests that accidentally leave a test server enabled and incur what Forrester referred to as the “shock bill.”  An active yet idle VM costs the same as one that is being used.
  • Not using existing investment.  The ease of using a public cloud makes it easy for local teams to “roll their own” in lieu of using existing infrastructure that has already been provided for them by their IT group.
  • Loss of control.  Public clouds are just that – public, and may not have the same level of security found in a hosted solution.  While the cloud provider may ensure that the operating system is fully patched they cannot guard against weak administrative passwords.
  • OpEx vs. CapEx.  Public clouds are almost always considered an operating expense where an on-premise solution is typically capitalized.

These issues should NOT be used as excuses to avoid public clouds.  Indeed there are “patterns” and best practices to ensure that the organization is making the best possible choices.

The second part of the presentation was insightful in that it called out some of the indirect benefits, or perhaps less obvious reasons, to use a public cloud.  For example:

  • Cloud computing is cool.  Developers want to work on cutting-edge technologies, and giving them access to Azure / EC2 helps with recruiting and retention.
  • Public clouds push local IT groups.  Some organizations struggle with non-responsive / bureaucratic IT groups.  The ability to “outsource” hosting (for public or staging servers) puts pressure on IT departments to provide better customer service.
  • Geographic distribution.  Although not discussed extensively in this presentation, services like Azure and EC2 allow organizations to host in a geographically distributed way.  This capability, particularly when combined with a caching strategy, is useful for improving performance for global customers and ensuring that an organization is compliant with local privacy regulations.

Forrester is a very careful organization and the economic claims in the presentation are no doubt true.  On the other hand, some organizations may find that their “mileage will vary.”  Cloud computing is nearly always more cost effective than buying new physical servers.  A hybrid public/private hosting strategy, leveraging platforms such as VMWare and Microsoft’s own Hyper-V virtualization system, could be more cost effective for established entities with an existing investment in on-premise hosting.  Private cloud platforms support the convenience of self-service and dynamic growth with minimal incremental operational cost.  (There is a fixed, typically capitalized, cost to host the servers.)  This type of solution is very appropriate for development and QA groups that need non-production servers.

Bottom line: for a fast-growing start-up there is nothing better than having no hosting footprint yet being able to scale on demand.  For established entities the public cloud also makes sense, but it should be considered in the context of existing investments and used where it makes sense.


Scaled Agile Framework (SAFe)

December 27, 2013

Implementing agile methods at higher levels, where multiple programs and business interests often intersect, has always been a challenge.  Consultant Dean Leffingwell, formerly of Rally Software and Rational Software, created a project management framework called the Scaled Agile Framework (SAFe) for applying agile principles at the enterprise level.

Scaled Agile Framework

At a high level SAFe is a set of best practices tailored for organizations to embrace agile principles at the portfolio level.  Conceptually SAFe creates a framework whereby there is an integrated view of, and coordination between, multiple different projects.  NB: The graphic on the SAFe home page (see screenshot above) is clickable and is a terrific agile reference in and of itself.

One of the best things about agile methodologies is that they are lightweight and self-directed.  High-level systems run the risk of having more overhead than value.  On the other hand, nearly every organization with more than one product needs an integrated view of how projects fit together.  Indeed, it is not unusual to see senior managers disconnected from day-to-day operations struggle to see how pieces fit together, or attempt to make invalid comparisons between teams, such as comparing story point velocity.

At the end of 2013, two of the market leaders in application life cycle management (ALM) are Rally Software and Microsoft.  Both Rally and Microsoft’s Team Foundation Server (TFS) have wholeheartedly embraced the notion of portfolio management in the latest iterations of their respective products.

Rob Pinna of the Rally Development team has a great analysis of the SAFe here.  Similarly InCycle Software, a Microsoft Gold Certified ALM partner, recently did a webinar highlighting a customized version of a TFS template they used to demo the capabilities of TFS to support SAFe.


2013 Pacific Crest SaaS Survey

October 20, 2013

In September David Stock of Pacific Crest (an investment bank focusing on SaaS companies) published the results of their 2013 Private SaaS Company Survey.  The survey covers a variety of financial and operating metrics pertinent to the management and performance of SaaS companies, ranging from revenues, growth, and cost structure to distribution strategy, customer acquisition costs, renewal rates, and churn.  Here are some of my top-line observations:

  • 155 companies surveyed with median: $5M revenue, 50 employees, 78 customers, $20K ACV, and a global reach
  • Excluding very small companies, median revenue growth is projected to be 36% in 2013
  • Excluding very small companies, the most effective distribution mode (as measured by growth rate) is mixed (40%), followed by inside sales (37%), field sales (27%), and internet sales (23%)
  • Excluding very small companies, $0.92 was spent for each dollar of new ACV from a new customer, whereas it was only $0.17 per dollar of upsell
  • The median company gets 13% of new ACV from upsells; the top growers upsell more than slower ones
  • Companies focused mainly on enterprise sales have the highest levels of PS revenue (21%), with a median GM of 29%
  • Median subscription gross margins are 76% for the group
  • Approximately 25% of companies make use of freemium in some way, although very little new revenue is derived here.  Try Before You Buy is much more commonly used: two-thirds of the companies use it and many of those derive significant revenues from it
  • The average contract length is 1.5 years with quarterly billing terms
  • Annual gross dollar churn (without the benefit of upsells) is 9%
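Combining a few of the survey’s median figures ($0.92 of acquisition spend per $1.00 of new ACV from new customers, $0.17 per upsell dollar, and 13% of new ACV coming from upsells), one can sketch a blended acquisition cost per dollar of new ACV.  This back-of-the-envelope interpretation is mine, not the survey’s.

```python
# Blended customer-acquisition cost per dollar of new ACV, weighting the
# new-customer and upsell costs by the share of ACV each contributes.

def blended_cac_per_dollar(new_cost: float, upsell_cost: float,
                           upsell_share: float) -> float:
    """Weighted cost to acquire one blended dollar of new ACV."""
    return new_cost * (1 - upsell_share) + upsell_cost * upsell_share

# Survey medians: $0.92 per new-customer dollar, $0.17 per upsell dollar,
# 13% of new ACV from upsells.
cost = blended_cac_per_dollar(0.92, 0.17, 0.13)
print(f"${cost:.3f} per blended new-ACV dollar")
```

The arithmetic also shows why the survey links upsell to growth: every point of ACV shifted from new-customer sales to upsell lowers the blended cost by roughly $0.0075 per dollar.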

The full survey can be accessed here.

