Agile Pre-mortem Retrospectives

June 6, 2014

"Failure Is Your Friend" is the title of the June 4, 2014 Freakonomics podcast.  The episode interviews cognitive psychologist Gary Klein, who describes an interesting technique called the pre-mortem: "With a pre-mortem you try to think about everything that might go wrong before it goes wrong."  As I listened to Klein explain how this works in the physical world and in medical procedures, it occurred to me that it could be a nice complement to an agile software development project.

Most Scrum teams do some type of post-mortem after each sprint.  Most of the literature today calls these activities retrospectives, which carries a more positive connotation.  (Taken literally, post mortem is Latin for "after death.")  After training exercises the Army conducts after-action reviews, affectionately called "AARs."  For informal AARs (formal AARs have a prescribed format that is expected to be followed), I always found that three questions elicited the most participation: what went well, what did not go well, and what could have been done better.  This same format is often effective in sprint retrospectives.

A pre-mortem retrospective follows a very different format.  It asks the participants to fast-forward to a point after the release and assume that the project was a failure.  Klein's suggestion is to take two minutes and ask each participant to privately compile a list of reasons the project failed.  He then surveys the group and consolidates the individual lists into a master list.  Finally, he asks everyone in the room to think of one thing they could do to help the project.  Ideally the team comes away more attuned to what could go wrong and more willing to engage in risk management.

In concept the idea makes a ton of sense.  I can see how it would force a team to be honest with itself about risks, temper overconfidence, and ultimately become more proactive.  On the other hand, a pre-mortem is one more meeting and one more activity that does not directly contribute to the project.  I question whether there is enough value to run a pre-mortem every sprint; for major new initiatives, however, it could be a useful exercise.  I quickly found two references on the topic:

http://www.slideshare.net/mgaewsj/pre-mortem-retrospectives

http://inevitablyagile.wordpress.com/2011/03/02/pre-mortem-exercise/


Using the right database tool

April 27, 2014

Robert Haas, a major contributor and committer on the PostgreSQL project, recently wrote a provocative post entitled "Why the Clock is Ticking for MongoDB."  He was responding to a post by the CEO of MongoDB, "Why the clock's ticking for relational databases."  I am no database expert, but it occurs to me that relational databases are not going anywhere AND NoSQL databases absolutely have a place in the modern world.  (I do not believe Haas was implying otherwise.)  It is a matter of using the right tool to solve the business problem.

As Haas indicates, RDBMS solutions are great for many problems, such as query and analysis workloads where the ACID properties (Atomicity, Consistency, Isolation, Durability) are important considerations.  When the size of the data, the need for global scale, and the transaction volume grow (think Twitter, Gmail, Flickr), NoSQL (read: not-only-SQL) solutions make a ton of sense.

Kristof Kovacs has the most complete comparison of the various NoSQL solutions I have seen.  MongoDB seems to be the most popular document database, Cassandra the leader for row/column data, and Couchbase for caching.  Quoting Kovacs: "That being said, relational databases will always be the best for the stuff that has relations."  To that end, there is no shortage of RDBMS offerings from the world's largest software vendors (Oracle 12c, Microsoft SQL Server, IBM DB2) as well as many open source options such as SQLite, MySQL, and PostgreSQL.

To be complete, Hadoop is not a database per se, though HBase is a database built on top of Hadoop.  Hadoop is a technology for crunching large amounts of data in a distributed manner, typically using batch jobs and the map-reduce design pattern.  It can be used alongside many NoSQL databases such as Cassandra.
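To give a feel for the map-reduce pattern itself, here is a minimal word-count sketch in C# (plain LINQ, not Hadoop's actual API): the map step emits a (word, 1) pair for every word, and the reduce step sums the counts per word.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class WordCount
    {
        static void Main()
        {
            var documents = new[] { "the quick brown fox", "the lazy dog" };

            // Map: emit a (word, 1) pair for every word in every document.
            var mapped = documents.SelectMany(doc =>
                doc.Split(' ').Select(word => new KeyValuePair<string, int>(word, 1)));

            // Shuffle/Reduce: group the pairs by word and sum the counts per word.
            var counts = mapped
                .GroupBy(pair => pair.Key)
                .Select(group => new { Word = group.Key, Count = group.Sum(p => p.Value) });

            foreach (var result in counts)
                Console.WriteLine($"{result.Word}: {result.Count}");
        }
    }

In Hadoop the map and reduce steps are distributed across a cluster and the input lives in HDFS, but the shape of the computation is the same.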


Economic Impact of Cloud Computing

March 23, 2014

In January I attended a webinar by Forrester analysts James Staten and Sean Owens on the economics of cloud computing.  The slides and a recording of the presentation are linked here and here.  Although the research was commissioned by Microsoft and thus has an Azure flavor, many of the insights apply equally to other cloud platforms such as Amazon or Rackspace.  What made this presentation unique is that it was targeted at economic decision makers rather than technology leaders.  It is well done and insightful.

The first part of the presentation gives a good basic introduction to the economics of cloud computing.  With a public cloud you pay only for what you use, development teams can self-provision, and, perhaps most importantly, an organization can instantly add capacity should demand materialize.

Although I am a huge proponent of public clouds, there are pitfalls.  Some of the economic reasons NOT to use a public cloud include:

  • Ease of running up a giant bill.  There are horror stories of teams that run a load test, accidentally leave the test servers enabled, and incur what Forrester referred to as the "shock bill."  An active yet idle VM costs the same as one doing real work; the sketch after this list shows how quickly that adds up.
  • Not using existing investment.  The ease of the public cloud makes it tempting for local teams to "roll their own" instead of using infrastructure their IT group has already provided.
  • Loss of control.  Public clouds are just that – public – and may not have the same level of security found in a hosted solution.  While the cloud provider may ensure that the operating system is fully patched, it cannot guard against weak administrative passwords.
  • OpEx vs. CapEx.  Public clouds are almost always treated as an operating expense, whereas an on-premises solution is typically capitalized.
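As a back-of-the-envelope illustration of the "shock bill," here is a quick calculation; the hourly rate is an assumption, since actual pricing varies by provider, region, and instance size.

    using System;

    class ShockBill
    {
        static void Main()
        {
            // Assumed rate for one mid-size VM; real prices vary by provider, region, and size.
            const decimal hourlyRate = 0.50m;
            const int vmCount = 10;          // a forgotten load-test cluster
            const int hoursIdle = 24 * 30;   // left allocated for a month

            // An allocated-but-idle VM bills exactly like a busy one.
            decimal bill = hourlyRate * vmCount * hoursIdle;
            Console.WriteLine($"Idle cluster bill for the month: {bill:C}");  // ~$3,600
        }
    }

The simple guard is to deallocate test machines as soon as a run finishes.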

These issues should NOT be used as excuses to avoid public clouds.  Indeed, there are "patterns" and best practices to ensure that an organization makes the best possible choices.

The second part of the presentation was insightful in that it called out some of the indirect benefits, or perhaps less obvious reasons, to use a public cloud.  For example:

  • Cloud computing is cool.  Developers want to work on cutting-edge technologies, and giving them access to Azure or EC2 helps with recruiting and retention.
  • Public clouds push local IT groups.  Some organizations struggle with non-responsive or bureaucratic IT groups.  The ability to "outsource" hosting (for public or staging servers) puts pressure on IT departments to provide better customer service.
  • Geographic distribution.  Although not discussed extensively in this presentation, services like Azure and EC2 allow organizations to host in a geographically distributed way.  This capability, particularly when combined with a caching strategy, is useful for improving performance for global customers and for complying with local privacy regulations.

Forrester is a careful organization and the economic claims in the presentation are no doubt true.  On the other hand, some organizations may find that their "mileage will vary."  Cloud computing is nearly always more cost-effective than buying new physical servers.  A hybrid public/private hosting strategy, leveraging platforms such as VMware or Microsoft's own Hyper-V virtualization system, could be more cost-effective for established entities with an existing investment in on-premises hosting.  Private cloud platforms offer the convenience of self-service and dynamic growth while keeping operational costs minimal.  (There is a fixed, typically capitalized, cost to host the servers.)  This type of solution is very appropriate for development and QA groups that need non-production servers.

Bottom line: for a fast-growing start-up there is nothing better than having no hosting footprint yet being able to scale on demand.  For established entities the public cloud also makes sense, but it should be weighed against existing investments and used where it fits.


Scaled Agile Framework (SAFe)

December 27, 2013

Implementing agile methods at higher levels, where multiple programs and business interests often intersect, has always been a challenge.  Consultant Dean Leffingwell, formerly of Rally Software and Rational Software, created a project management framework called the Scaled Agile Framework (SAFe) for applying agile principles at the enterprise level.

Scaled Agile Framework

At a high level SAFe is a set of best practices tailored for organizations that want to embrace agile principles at the portfolio level.  Conceptually SAFe creates a framework that provides an integrated view of, and coordination across, multiple projects.  NB: The graphic on the SAFe home page (see screenshot above) is clickable and is a terrific agile reference in itself.

One of the best things about agile methodologies is that they are lightweight and self-directed.  Frameworks at this higher level run the risk of adding more overhead than value.  On the other hand, nearly every organization with more than one product needs an integrated view of how its projects fit together.  Indeed, it is not unusual to see senior managers who are disconnected from day-to-day operations struggle to see how the pieces fit together, or attempt invalid comparisons between teams, such as comparing story-point velocity.

At the end of 2013, two of the market leaders in application life cycle management (ALM) are Rally Software and Microsoft.  Both Rally and Microsoft's Team Foundation Server (TFS) have wholeheartedly embraced the notion of portfolio management in the latest iterations of their respective products.

Rob Pinna of the Rally development team has a great analysis of SAFe here.  Similarly, InCycle Software, a Microsoft Gold Certified ALM partner, recently did a webinar highlighting a customized TFS template used to demonstrate TFS's ability to support SAFe.


2013 Pacific Crest SaaS Survey

October 20, 2013

In September David Stock of Pacific Crest (an investment bank focused on SaaS companies) published the results of its 2013 Private SaaS Company Survey.  The survey covers a variety of financial and operating metrics pertinent to the management and performance of SaaS companies, ranging from revenue, growth, and cost structure to distribution strategy, customer acquisition costs, renewal rates, and churn.  Here are some of my top-line observations:

  • 155 companies surveyed, with medians of $5M revenue, 50 employees, 78 customers, and $20K ACV, and a global reach
  • Excluding very small companies, median revenue growth is projected to be 36% in 2013
  • Excluding very small companies, the most effective distribution mode (as measured by growth rate) is mixed (40%), followed by inside sales (37%), field sales (27%), and internet sales (23%)
  • Excluding very small companies, $0.92 was spent to acquire each dollar of new ACV from a new customer, versus only $0.17 per dollar of upsell ACV (see the payback sketch after this list)
  • The median company gets 13% of new ACV from upsells; the top growers upsell more than the slower ones
  • Companies focused mainly on enterprise sales have the highest levels of professional services revenue (21%), with a median gross margin of 29%
  • Median subscription gross margins are 76% for the group
  • Approximately 25% of companies make use of freemium in some way, although very little new revenue is derived from it.  "Try before you buy" is much more common: two-thirds of the companies use it, and many of those derive significant revenue from it
  • The average contract length is 1.5 years with quarterly billing terms
  • Annual gross dollar churn (without the benefit of upsells) is 9%
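As a rough illustration of what those acquisition-cost figures imply, the sketch below combines the $0.92 and $0.17 numbers with the 76% median subscription gross margin to estimate a simplified CAC payback period (ignoring churn and discounting).

    using System;

    class CacPayback
    {
        static void Main()
        {
            const decimal grossMargin = 0.76m;      // median subscription gross margin
            const decimal cacNewCustomer = 0.92m;   // $ spent per $1 of new-customer ACV
            const decimal cacUpsell = 0.17m;        // $ spent per $1 of upsell ACV

            // Months of gross profit needed to recover $1 of acquisition spend.
            decimal PaybackMonths(decimal cac) => cac / (grossMargin / 12m);

            Console.WriteLine($"New customer payback: {PaybackMonths(cacNewCustomer):F1} months");  // ~14.5
            Console.WriteLine($"Upsell payback:       {PaybackMonths(cacUpsell):F1} months");       // ~2.7
        }
    }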

The full survey can be accessed here.


Thoughts on SDET

September 8, 2013

I was recently approached by a colleague about the concept of the Software Development Engineer in Test (SDET) – developers who build software used for testing.  Essentially the argument is that we need to move away from having two separate QA functions, a manual QA team and an automation team.  The "industry" (Microsoft most prominently) is moving toward 100% automation, and QA engineers are now called SDETs.

I reached out to a former colleague in Bangalore about his experience managing a QA group at Microsoft.  (He has since left and is presently a lead at another prominent organization.)  Here is what he told me:

MS has the concept of SDET, i.e., software development engineer in test. What makes this unique is the blend of technical knowledge (language and coding skills) along with testing domain knowledge, which allows this role to contribute extensively to designing in-house automation tools and frameworks, carry out white-box testing at the code level, and contribute actively to automation and performance testing.
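To make the role concrete, here is a minimal sketch of the kind of code-level test an SDET might own, written with NUnit; the DiscountCalculator class and its rules are made-up placeholders.

    using NUnit.Framework;

    // Made-up class under test; an SDET exercises code like this directly, not just through the UI.
    public class DiscountCalculator
    {
        public double Apply(double unitPrice, int quantity) =>
            quantity >= 10 ? unitPrice * quantity * 0.9 : unitPrice * quantity;
    }

    [TestFixture]
    public class DiscountCalculatorTests
    {
        [TestCase(5.0, 1, 5.0)]     // below the threshold: no discount
        [TestCase(5.0, 10, 45.0)]   // at 10 units: 10% discount applied
        public void Apply_ReturnsExpectedTotal(double unitPrice, int quantity, double expected)
        {
            var calculator = new DiscountCalculator();
            Assert.That(calculator.Apply(unitPrice, quantity), Is.EqualTo(expected).Within(0.0001));
        }
    }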

I then did some reading on my own about SDET and learned a bit from the web.

My very first job was testing printers for Digital.  A lot of what we did was write code that exercised every single mode the printer supported.  For example, we wrote code in Pascal that generated PostScript by hand.  Some of our programs were added to a test automation tool called the "creaker"; others had to be run on their own.  This was 90% black-box testing, and we did miss some things, but we were software engineers doing testing.  I get the argument that testers should be looking at the code and writing unit tests in addition to black-box testing.

I come away from all of this thinking that SDET is really hard to pull off without serious organizational commitment.  I could see it working in a larger test organization where the concept is institutionalized, or if the testers reported to Development.  On the other hand, testing today is not as effective as it needs to be: there is never enough automation (or enough people doing it), and, more problematic, product knowledge is typically lacking.


Bundling and Minification

September 1, 2013

Found a great post from Rick Anderson about Bundling and Minification in .NET 4.5.  From the blog post:

Bundling is a new feature in ASP.NET 4.5 that makes it easy to combine or bundle multiple files into a single file. You can create CSS, JavaScript and other bundles. Fewer files means fewer HTTP requests and that can improve first page load  performance.

The (basic) implementation is fall-down easy.  Create a BundleConfig class like so…

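Something along these lines, using System.Web.Optimization; the bundle names and file paths are illustrative.

    using System.Web.Optimization;

    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Combine (and, in release builds, minify) the scripts into one virtual file.
            bundles.Add(new ScriptBundle("~/bundles/jquery").Include(
                "~/Scripts/jquery-{version}.js"));

            // Same idea for the stylesheets.
            bundles.Add(new StyleBundle("~/Content/css").Include(
                "~/Content/site.css"));
        }
    }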

Then reference it from Global.asax.cs like this…

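A minimal registration in Application_Start, assuming the default MVC Global.asax.cs structure:

    using System.Web.Optimization;

    public class MvcApplication : System.Web.HttpApplication
    {
        protected void Application_Start()
        {
            // Register all bundles when the application starts.
            BundleConfig.RegisterBundles(BundleTable.Bundles);

            // Optional: force bundling/minification even when <compilation debug="true">.
            // BundleTable.EnableOptimizations = true;
        }
    }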

