Thoughts on the RightScale Annual State of the Cloud Report

May 2, 2015

In January of 2015, cloud portfolio management company RightScale Inc. surveyed 930 users of cloud services for its Annual State of the Cloud report.  The findings are both interesting and insightful.  Several key findings are highlighted here.

  1. Cloud is a given, and hybrid cloud is the preferred strategy. According to the survey, 93% of respondents are using the cloud in one way or another.  Further, more than half (55%) of enterprises are using hybrid clouds – combining public clouds with private clouds or on-premises infrastructure.
  • Savvy start-ups realize that public clouds can be expensive relative to self-hosting in an economy co-lo facility. Until traffic ramps to the point where the ability to scale immediately justifies the premium, there is no urgency to host in AWS or Azure.
  • Public clouds are ideal for a variety of scenarios – unknown, unpredictable, or spiking traffic; the need to host in a remote geography; or an organization with higher priorities than hosting. Conversely, self-hosting can be more economical.  For example, an Amazon c3.2xlarge – 8 vCPU and 16 GB RAM – costs $213 per month (as of May 2015), or approximately $2,500 per year per server (see the quick cost sketch after this list).  Organizations that already have an investment in a data center or on-premises capacity often find it cost-effective to self-host internal applications.
  • Not surprisingly, many enterprises are reluctant to walk away from significant capital investments in their own equipment. Hybrid clouds allow organizations to continue to extract value from these investments for tasks that may be difficult or costly to implement in a public cloud – for example, high-security applications, solutions that must interact with behind-the-firewall systems, or processing- and resource-intensive programs.
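To make the arithmetic concrete, here is a quick C# sketch (the $213/month figure is the one quoted above; the three-year horizon is an assumed, typical server depreciation window):

using System;

class CloudCostSketch
{
    static void Main()
    {
        // Monthly figure quoted above for a c3.2xlarge (8 vCPU, 16 GB RAM).
        const decimal monthlyCost = 213m;

        decimal annualCost = monthlyCost * 12;   // ~$2,556 per server per year
        decimal threeYearCost = annualCost * 3;  // assumed three-year depreciation window

        Console.WriteLine("Annual:     {0:N0} USD per server", annualCost);
        Console.WriteLine("Three-year: {0:N0} USD per server", threeYearCost);
    }
}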

93% of Respondents Are Using the Cloud

  2. DevOps rises; Docker soars. DevOps is the new agile.  It is the hip buzzword floating around every organization.  According to Gene Kim, author of The Phoenix Project, DevOps is the fast flow of code from idea to customer hands.  The manifestation of DevOps is the ability to release code as frequently as several times a day.  To achieve this level of flexibility, organizations need to eliminate bottlenecks and achieve what Kim calls flow.  Tools like Puppet, Chef, and Docker are enablers of DevOps.  In forthcoming surveys it can be expected that Microsoft’s InRelease (part of Visual Studio Online) and Hyper-V Containers will play prominent roles in organizations that use the Microsoft stack.

DevOps Adoption Up in 2015

  3. Amazon Web Services (AWS) continues to dominate in public cloud, but Azure makes inroads among enterprises. AWS adoption is 57 percent, while Azure IaaS is second at 12 percent.  (Among enterprise respondents, Azure IaaS narrows the gap with 19 percent adoption, compared to 50 percent for AWS.)  This is consistent with other market surveys – see the Synergy Research Group study from October 2014.
  • At this point the market has effectively narrowed to two major cloud IaaS providers: Amazon and Microsoft. While there are other offerings from Rackspace, IBM, HP, and other non-traditional sources (e.g., Verizon), these seem to be solutions for organizations that already have a relationship with that vendor or have a specific reason to avoid the market leaders.
  • There are certainly many other PaaS solutions, including Google, Salesforce.com, and Heroku (owned by SFDC). Similarly, there are many SaaS solutions, including Google Apps, NetSuite, Salesforce.com, Taleo, and many other vertical-specific offerings.
  • The respondent base is heavily weighted toward small business – 624 SMB vs. 306 enterprise respondents. Although Microsoft is working hard to attract start-ups, the reality is that today most entrepreneurs choose open source technologies over Windows.  Conversely, Microsoft technologies are disproportionately represented in larger enterprises.  While AWS is today’s undisputed market leader, Azure is growing quickly and can be expected to close the gap.  Microsoft is investing heavily in the technology, is actively reaching out to the open source community, and is making it apparent that it is not satisfied with being an also-ran.
  4. Public cloud leads in breadth of enterprise adoption, while private cloud leads in workloads.
  5. Private cloud stalls in 2015 with only small changes in adoption.
  • Private clouds are being used for functional and load testing, as well as for hosting internal applications (e.g., an intranet) where the costs and risks associated with a public footprint do not exist. Where in the past organizations had “farms” of low-end desktop PCs and blade servers in server closets, it makes sense that these types of applications have moved to private clouds hosted on virtualized servers that can be centrally managed, monitored, and delivered to users more cost-effectively.
  • Interestingly, the data suggests that the market for virtualization infrastructure has matured and is not growing. The market leader in this space continues to be VMware, with Microsoft gaining traction in enterprises.
  6. Significant headroom remains for more enterprise workloads to move to the cloud. An interesting data point – 68% of enterprise respondents say that less than 20% of their applications are currently running in the cloud.
  • It will be interesting to see how this number changes over time. Reversing the statistic, more than 80% of enterprise applications are still run on premises.  This could be due to IT organizations’ heavy investments in capitalized equipment and data centers.  It could be that the economics of a public cloud are still too expensive to justify the move.  There could be technical constraints, such as security, holding back cloud adoption.  Finally, there could be organizational prejudices against taking what is perceived as a risk in embracing the public cloud.  Very likely it is all of the above.
  • The role of a visionary CTO is to move the organization forward to embrace new technologies, break down prejudices, and find new and better ways to serve customers. Cloud vendors are working to make it easier for organizations of all sizes to adopt the cloud by lowering costs, increasing security, and providing new features that make management more seamless.
  • While this study does not provide any data on the breakdown of PaaS vs. IaaS, it is a reasonable assumption that most enterprise adoption of the cloud is IaaS, as this is by and large simply re-hosting an application as-is. PaaS applications, on the other hand, typically need more integration, which in many cases involves software development.  Once done, however, PaaS applications are often more secure, scalable, and extensible because they take advantage of the hosting platform’s infrastructure.

Cloud Challenges 2015 vs. 2014

Finally, RightScale has a proprietary maturity model that ranks organizations’ comfort levels with cloud-related technologies.  Interestingly, the data suggests that nearly 50% of organizations have yet to do any significant work in the cloud.  This can certainly be expected to change over the next 2-3 years.

Cloud Maturity of Respondents


Using the right database tool

April 27, 2014

Robert Haas, a major contributor and committer on the PostgreSQL project, recently wrote a provocative post entitled “Why the Clock is Ticking for MongoDB.”  He was actually responding to a post by the CEO of MongoDB, “Why the clock’s ticking for relational databases.”  I am no database expert; however, it occurs to me that relational databases are not going anywhere AND NoSQL databases absolutely have a place in the modern world.  (I do not believe Haas was implying otherwise.)  It is a matter of using the right tool to solve the business problem.

As Haas indicates, RDBMS solutions are great for many problems, such as query and analysis, where the ACID properties (atomicity, consistency, isolation, and durability) are important considerations.  When the size of the data, the need for global scale, and the transaction volume grow (think Twitter, Gmail, Flickr), NoSQL (read: not-only-SQL) solutions make a ton of sense.
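For instance, the atomicity in ACID is what lets a multi-statement money transfer commit or roll back as a unit.  A C# sketch with ADO.NET (the Accounts table and connection string are hypothetical):

using System.Data.SqlClient;

public static class TransferService
{
    // The debit and the credit commit together or not at all - the A in ACID.
    public static void Transfer(string connectionString, int fromId, int toId, decimal amount)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction tx = connection.BeginTransaction())
            {
                try
                {
                    AdjustBalance(connection, tx, fromId, -amount);
                    AdjustBalance(connection, tx, toId, amount);
                    tx.Commit();
                }
                catch
                {
                    tx.Rollback();  // neither statement takes effect
                    throw;
                }
            }
        }
    }

    private static void AdjustBalance(SqlConnection conn, SqlTransaction tx, int id, decimal delta)
    {
        using (var cmd = new SqlCommand(
            "UPDATE Accounts SET Balance = Balance + @delta WHERE Id = @id", conn, tx))
        {
            cmd.Parameters.AddWithValue("@delta", delta);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.ExecuteNonQuery();
        }
    }
}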

Kristof Kovacs’ list offers the most complete comparison of the various NoSQL solutions.  Mongo seems to be the most popular document database, Cassandra for row/column data, and Couchbase for caching.  Quoting Kovacs: “That being said, relational databases will always be the best for the stuff that has relations.”  To that end there is no shortage of RDBMS solutions from the world’s largest software vendors (Oracle 12c, Microsoft SQL Server, IBM DB2) as well as many open source solutions such as SQLite, MySQL, and PostgreSQL.

In the spirit of being complete, Hadoop is not a database per se – though HBase is a database built on top of Hadoop.  Hadoop is a technology meant for crunching large amounts of data in a distributed manner, typically using batch jobs and the map-reduce design pattern. It can be used with many NoSQL databases, such as Cassandra.
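To make the pattern concrete, here is a minimal single-machine word-count sketch in C# using LINQ; Hadoop applies the same map / group / reduce steps, but distributed across a cluster:

using System;
using System.Linq;

class WordCount
{
    static void Main()
    {
        string[] documents = { "the quick brown fox", "the lazy dog", "the quick dog" };

        var counts = documents
            // Map: emit a (word, 1) pair for every word in every document.
            .SelectMany(doc => doc.Split(' '), (doc, word) => new { word, count = 1 })
            // Shuffle + reduce: group the pairs by word and sum the counts.
            .GroupBy(pair => pair.word, pair => pair.count)
            .Select(group => new { Word = group.Key, Count = group.Sum() });

        foreach (var entry in counts)
            Console.WriteLine("{0}: {1}", entry.Word, entry.Count);
    }
}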


Hosting an MVC3 (with membership) application on EC2

February 4, 2012

One of my side projects was to get an MVC3 application that uses the Razor View Engine and Membership hosted on EC2 running Linux. I found some amazingly helpful resources along the way – particularly from Nathan Bridgewater at Integrated Web Systems.

Step one of the project is to get an EC2 instance prepped and ready.  Basically I followed the cookbook instructions on Bridgewater’s site – Get Started with Amazon EC2, Run Your .Net MVC3 (RAZOR) Site in the Cloud with Linux Mono.

The exact commands I used:

Create a new instance from AMI ami-ccf405a5 and associate an Elastic IP (xx.xx.xx.xx)
sudo apt-get update && sudo apt-get dist-upgrade -y
wget http://badgerports.org/directhex.ppa.asc
sudo apt-key add directhex.ppa.asc
sudo apt-get install python-software-properties
sudo add-apt-repository 'deb http://ppa.launchpad.net/directhex/ppa/ubuntu lucid main'
sudo apt-get update
sudo apt-get install mono-apache-server4 mono-devel libapache2-mod-mono
cd /srv
sudo mkdir www; cd www
sudo mkdir default
sudo chown www-data:www-data default
sudo chmod 755 default
cd /etc/apache2/sites-available/
sudo vi mono-default (see the mono-default contents below; change the IP address)
cd /etc/apache2/sites-enabled
sudo rm 000-default
sudo ln -s /etc/apache2/sites-available/mono-default 000-mono
sudo mv /var/www/index.html /srv/www/default
sudo vi /srv/www/default/index.html
sudo apt-get install apache2
sudo service apache2 restart
Test in a browser via the IP address (you should see the default Apache page)

My mono default:

<VirtualHost *:80>
  # xx.xx.xx.xx is my Elastic IP address
  ServerName xx.xx.xx.xx
  ServerAdmin myemail@domain.com
  DocumentRoot /srv/www/default
  MonoServerPath xx.xx.xx.xx "/usr/bin/mod-mono-server4"
  MonoDebug xx.xx.xx.xx true
  MonoSetEnv xx.xx.xx.xx MONO_IOMAP=all
  MonoApplications xx.xx.xx.xx "/:/srv/www/default"

  <Location "/">
    Allow from all
    Order allow,deny
    MonoSetServerAlias xx.xx.xx.xx
    SetHandler mono
    SetOutputFilter DEFLATE
    SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary

    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
  </Location>
</VirtualHost>

Step two is to test Mono with a simple ASP.NET page.  Put this file into /srv/www/default.  Edit with sudo and view via browser at http://xx.xx.xx.xx/test.aspx.

<%@ Page Language="C#" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>ASP.Net Test page</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script runat="server">
private void Page_Load(Object sender, EventArgs e)
{
lblTest.Text = "This is a successful test.";
}
</script>
</head>
<body>
<h1>
This is a test page</h1>
<asp:Label runat="server" ID="lblTest"></asp:Label>
</body>
</html>
If problems are encountered, check the logs in /var/log/apache2/access.log and /var/log/apache2/error.log.

Step three is to get MySQL installed and tested with this simple application.

sudo apt-get install mysql-server
sudo apt-get install libmysql6.1-cil

Then, from the mysql client (mysql -u root -p), create and seed a sample database:

CREATE DATABASE sample; USE sample;
CREATE TABLE test (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(25));
INSERT INTO sample.test VALUES (null, 'Lucy');
INSERT INTO sample.test VALUES (null, 'Ivan');
INSERT INTO sample.test VALUES (null, 'Nicole');
INSERT INTO sample.test VALUES (null, 'Ursula');
INSERT INTO sample.test VALUES (null, 'Xavier');
CREATE USER 'testuser'@'localhost' IDENTIFIED BY 'somepassword';
GRANT ALL PRIVILEGES ON sample.* TO 'testuser'@'localhost';
FLUSH PRIVILEGES;

Put this file into /srv/www/default. Edit with sudo and view it in a browser via the instance’s IP address.

<%@ Page Language="C#" %>
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="MySql.Data.MySqlClient" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>ASP and MySQL Test Page</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<script runat="server">
private void Page_Load(Object sender, EventArgs e)
{
string connectionString = "Server=127.0.0.1;Database=sample;User ID=testuser;Password=somepassword;Pooling=false;";
MySqlConnection dbcon = new MySqlConnection(connectionString);
dbcon.Open();

MySqlDataAdapter adapter = new MySqlDataAdapter("SELECT * FROM test", dbcon);
DataSet ds = new DataSet();
adapter.Fill(ds, "result");

dbcon.Close();
dbcon = null;

SampleControl.DataSource = ds.Tables["result"];
SampleControl.DataBind();
}
</script>
</head>
<body>
<h1>Testing Sample Database</h1>
<asp:DataGrid runat="server" ID="SampleControl" />
</body>
</html>

Step four is to get the simplest possible MVC3 Razor application functioning on Ubuntu / EC2.  Again, Bridgewater has a more detailed explanation of what to do on his website.

  1. Go into Visual Studio 2010 and create a new MVC3 / Razor project, making no changes to the default project template.
  2. Build and run it locally.
  3. Ensure that these references are set to “Copy Local”: System.Web.Mvc, System.Web.Helpers, and System.Web.Routing.
  4. Copy System.Web.Razor, System.Web.WebPages, System.Web.WebPages.Razor, and System.Web.WebPages.Deployment into your application’s bin directory.  You will find these files in C:\Program Files (x86)\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies.
  5. Publish the application to a scratch directory.
  6. Copy the published application to your EC2 machine.  I used Git Bash to tar the files (tar -zcvf aws.tar.gz *) as Bridgewater recommends, but could not get scp to work so I FTP’d the file over.
  7. On the EC2 machine: cd /srv/www/default; sudo mv /home/ubuntu/aws.tar.gz .; sudo tar -zxvf *.gz; sudo chown -R www-data:www-data *; sudo chmod 755 *; sudo service apache2 restart
  8. Confirm it is working from a browser by checking the IP address: http://xx.xx.xx.xx
  9. NB: I had to hit refresh several times before the application would work.

Step five is to implement membership using MySQL.

  1. On your Windows machine, edit the default controller and decorate it with the [Authorize] attribute (a minimal sketch follows this list).
  2. Edit your web.config as shown below.  This is where it can get hairy.  If you want to run this locally on Windows you need to install the MySQL connector for .NET and Mono (http://dev.mysql.com/downloads/connector/net/).  Make sure that you reference system.web; on Ubuntu the application uses system.data.  The trick is to add them both so you can run the same code on Ubuntu and Windows.  Also notice that I’ve left the database password in clear text.  As Nathan notes, this is not a good practice.
  3. On the Ubuntu machine, go into MySQL and create a database called membership.
  4. Deploy the application to EC2 and test it as in step four.
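For step 1, a minimal sketch of the decorated controller (the controller name and welcome message are the MVC3 template defaults; the namespace is whatever your project uses):

using System.Web.Mvc;

namespace MvcApplication1.Controllers
{
    // [Authorize] forces forms authentication for every action on this
    // controller, which is what exercises the MySQL membership provider.
    [Authorize]
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewBag.Message = "Welcome to ASP.NET MVC!";
            return View();
        }
    }
}

The web.config for step 2: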
<?xml version="1.0"?>

<!--
 For more information on how to configure your ASP.NET application, please visit
 http://go.microsoft.com/fwlink/?LinkId=152368
 -->

<configuration>
 <connectionStrings>
 <add name="Default"
 connectionString="data source=127.0.0.1;user id=aspnet_user;
 password=secret_password;database=membership;"
 providerName="MySql.Data.MySqlClient" />
 </connectionStrings>

<system.web>
 <compilation debug="true" targetFramework="4.0">
 <assemblies>
 <add assembly="System.Web.Abstractions, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
 <add assembly="System.Web.Routing, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
 <add assembly="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
 </assemblies>
 </compilation>

<authentication mode="Forms">
 <forms loginUrl="~/Account/LogOn" path="/" timeout="2880" />
 </authentication>

<!--NOTE that "hashed" isn't supported with the public release of MySql.Web 6.3.5 under
 Mono runtime. But I can't bring myself to share sample code that doesn't hash the
 passwords by default. ;) The version included with this sample project is slightly modified to
 allow hashed passwords in Mono. I highly recommend checking out the latest version of
 MySql .NET Connector. http://dev.mysql.com

 Also, I found that you have to rebuild MySql.Data and MySql.Web
 using .NET 4.0 profile if you want it to work with Asp.Net 4.0 under Mono. This is a known bug and should
 be published in upcoming versions of the connector. -->
 <membership defaultProvider="MySqlMembershipProvider">
 <providers>
 <clear/>
 <add name="MySqlMembershipProvider"
 type="MySql.Web.Security.MySQLMembershipProvider, mysql.web"
 connectionStringName="Default"
 enablePasswordRetrieval="false"
 enablePasswordReset="true"
 requiresQuestionAndAnswer="false"
 requiresUniqueEmail="true"
 passwordFormat="hashed"
 maxInvalidPasswordAttempts="5"
 minRequiredPasswordLength="6"
 minRequiredNonalphanumericCharacters="0"
 passwordAttemptWindow="10"
 applicationName="/"
 autogenerateschema="true"/>
 </providers>
 </membership>

<roleManager enabled="true" defaultProvider="MySqlRoleProvider">
 <providers>
 <clear/>
 <add connectionStringName="Default"
 applicationName="/"
 name="MySqlRoleProvider"
 type="MySql.Web.Security.MySQLRoleProvider, mysql.web"
 autogenerateschema="true"/>
 </providers>
 </roleManager>

<profile>
 <providers>
 <clear/>
 <add type="MySql.Web.Security.MySqlProfileProvider, mysql.web"
 name="MySqlProfileProvider"
 applicationName="/"
 connectionStringName="Default"
 autogenerateschema="true"/>
 </providers>
 </profile>

<pages>
 <namespaces>
 <add namespace="System.Web.Mvc" />
 <add namespace="System.Web.Mvc.Ajax" />
 <add namespace="System.Web.Mvc.Html" />
 <add namespace="System.Web.Routing" />
 </namespaces>
 </pages>

<!--Don't forget to update this... I left it open to make it easier to debug.-->
 <customErrors mode="Off"/>
 </system.web>

<system.data>
 <DbProviderFactories>
 <clear/>
 <add name="MySQL Data Provider"
 description="ADO.Net driver for MySQL"
 invariant="MySql.Data.MySqlClient"
 type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data"/>
 </DbProviderFactories>
 </system.data>

<system.webServer>
 <validation validateIntegratedModeConfiguration="false"/>
 <modules runAllManagedModulesForAllRequests="true"/>
 </system.webServer>

<runtime>
 <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
 <dependentAssembly>
 <assemblyIdentity name="System.Web.Mvc" publicKeyToken="31bf3856ad364e35" />
 <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
 </dependentAssembly>
 </assemblyBinding>
 </runtime>
</configuration>

HTML5 Validation using Yepnope

November 23, 2011

In this month’s MSDN Magazine there is an interesting article on browser and feature detection.  What really caught my eye was the piece on Modernizr, the JavaScript library that implements browser feature detection.

Modernizr has built-in detection for most HTML5 and CSS3 features and is very easy to use in your code. It is widely adopted and constantly enhanced. Both Modernizr and jQuery ship with the ASP.NET MVC tools.

As HTML5 and CSS3 become more and more prevalent, feature detection is increasingly relevant.

A growing number of ready-made “fallbacks” for many HTML5 features, known as shims and polyfills, can ease that burden. These come in the form of CSS and JavaScript libraries, or sometimes even Flash or Silverlight controls, that you can use in your project to add missing HTML5 features to browsers that don’t otherwise support them. The difference between shims and polyfills is that shims only mimic a feature and each has its own proprietary API, while polyfills emulate both the HTML5 feature itself and its exact API. So, generally speaking, using a polyfill saves you the hassle of having to adopt a proprietary API.  The HTML5 Cross Browser Polyfills collection on GitHub contains a growing list of available shims and polyfills.

As .NET developers we are usually insulated from browser incompatibility issues; however, there may be situations where you are not using the .NET framework.  There is a good example of using yepnope (a conditional resource loader) for HTML5 form validation at CSSKarma.  I tweaked and re-implemented the example using a .NET MVC application, which you can see below.  A couple of “gotchas”:

· Modernizr is part of the .NET MVC tools that come from Microsoft; however, yepnope is not in the release of the MVC tools that I have on my machine.  You will need to download version 2.0 from modernizr.com to get yepnope support.

· The jQuery syntax for working with yepnope takes some getting used to.  Make sure to load jQuery first.

· Visual Studio 2010 does not seem to know about HTML5 and will warn of validation issues when you use its attributes. See Figure 2.

[Figure 1 – MyValidation.js, based on the example at CSSKarma]

[Figure 2 – HTML5 source]

[Figure 3 – Output]

References:

http://yepnopejs.com/

http://haz.io/

http://modernizr.com


Best practices for adding scalability

November 11, 2011

My thesis is that you can’t have a good SaaS application that doesn’t scale.  By definition the need for scalability is driven by customer demand, but there is demand and there is DEMAND. A handful of lucky organizations (Google, Twitter, Facebook) face industrial-strength volume every minute of every day. Organizations with this type of DEMAND can afford entire divisions dedicated to managing scalability. Most people are dealing with optimizing their resources for linear growth, or the happy situation where their application (Instagram) catches fire, in some cases overnight. A scalable architecture makes it possible to expand to cloud services such as EC2 and Azure, or even to locally hosted capacity. Absent a scalable architecture, an organization is faced with curating a collection of tightly coupled servers and overseeing a maintenance nightmare.

Scalability is the ability to handle additional load by adding more computational resources.  Performance is not scalability; however, improving system performance mitigates to some degree the need for scalability.  Performance is the number of operations per unit of time that a system can handle (e.g., words / second, pages served / day, etc.).  There are two types of scalability – vertical and horizontal.

Vertical scalability is achieved by adding more power (more RAM, a faster CPU) to a single machine, and typically results in incremental improvements.  Horizontal scalability means accommodating more load by distributing processing across multiple computers.  Where vertical scaling is relatively trivial to implement, horizontal scaling is much more complex; conversely, it offers theoretically unlimited capacity.  Google is the classic example of near-infinite horizontal scalability built on thousands of low-cost commodity servers.

If you have the luxury of working from a blank sheet of paper, or have the flexibility to implement a major new technology stack, some of the better solutions for implementing scalability include ActiveMQ and Hadoop. Microsoft’s AppFabric Service Bus promises capability in this area for Azure-hosted applications. Many times scalability was considered when an application was first created but has proven inadequate for current demand.  The following are suggestions for improving an existing application’s scalability.

Microsoft’s Five Commandments of Designing for Scalability

  • Do Not Wait – A process should never wait longer than necessary.
  • Do Not Fight for Resources – Acquire resources as late as possible and release them as soon as possible (see the sketch after this list).
  • Design for Commutability – Two or more operations are commutative if they can be applied in any order and still obtain the same result.
  • Design for Interchangeability – Manage resources such that they are interchangeable (e.g., database connections).  Keep server-side components as stateless as possible.
  • Partition Resources and Activities – Minimize relationships between resources and between activities.
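As a small illustration of “Do Not Fight for Resources” – acquire late, release early – here is a C# sketch against a hypothetical Orders table and connection string:

using System.Data.SqlClient;

public static class OrderRepository
{
    // The using blocks open the pooled connection at the last possible moment
    // and return it immediately, rather than holding it across unrelated work.
    public static int CountOrders(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}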

Microsoft’s Best Practices for Scalability

  • Use Clustering Technologies such as load balancers, message brokers, and other solutions that implement a decoupled architecture.
  • Consider Logical vs. Physical Tiers such as the model-view-controller (MVC) architecture.
  • Isolate Transactional Methods such that components implementing transactions are distinct from those that do not.
  • Eliminate Business Layer State such that, wherever possible, server-side objects are stateless.

Shahzad Bhatti’s Ten Commandments for Scalable Architecture

  1. Divide and conquer – Design a loosely coupled, shared-nothing architecture.
  2. Use message-oriented middleware (an ESB) to communicate with services.
  3. Resource management – Manage HTTP sessions and avoid them for static content; close all resources, such as database connections, after use.
  4. Replicate data – For write-intensive systems use a master-master scheme to replicate the database; for read-intensive systems use a master-slave configuration.
  5. Partition data (sharding) – Use multiple databases to partition the data.
  6. Avoid single points of failure – Identify any single point of failure in hardware, software, network, or power supply.
  7. Bring processing closer to the data – Instead of transmitting large amounts of data over the network, bring the processing to the data.
  8. Design for service failures and crashes – Write your services to be idempotent so that retries can be done safely.
  9. Dynamic resources – Design service frameworks so that resources can be added or removed automatically and clients can discover them automatically.
  10. Smart caching – Cache expensive operations and content as much as possible (a small sketch follows this list).
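For “Smart Caching,” a minimal sketch using .NET 4’s MemoryCache (the quote service and the five-minute policy are illustrative assumptions):

using System;
using System.Runtime.Caching;

public static class QuoteService
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Cache an expensive lookup for five minutes so repeated requests do not
    // re-run the underlying query or computation.
    public static decimal GetQuote(string symbol)
    {
        object cached = Cache.Get(symbol);
        if (cached != null)
            return (decimal)cached;

        decimal quote = LoadQuoteFromBackend(symbol); // the expensive call
        Cache.Set(symbol, quote, DateTimeOffset.Now.AddMinutes(5));
        return quote;
    }

    // Placeholder (assumed) for a slow database or web-service call.
    private static decimal LoadQuoteFromBackend(string symbol)
    {
        return 42.00m;
    }
}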



Comparing IE9, Firefox 5, and Chrome 12

July 31, 2011

Since the release of IE9 in March of 2011, Microsoft has been claiming that Internet Explorer is the best browser for business. They recently commissioned Forrester Research to help justify and quantify this claim. The actual study, “The Total Economic Impact of Windows Internet Explorer 9,” assessed the ROI six large organizations experienced upgrading from IE8 to IE9. Firefox and Chrome are never mentioned in the Forrester study. I accept the argument that IE is more optimized for homogeneous enterprise environments than Firefox and Chrome. I don’t think it’s fair or even accurate to imply that IE is the best browser.

I regularly use all three mainstream browsers and have a pretty good sense of their strengths and weaknesses. I did a bunch of real-world testing, trying various sites, configuring the layout of the browsers, and looking at different features. In the end, with the exception of things that matter only to a relatively small audience, I came away feeling that all three are more than adequate for day-to-day business usage.

Internet Explorer

IE9 has come a very long way. Its predecessors were buggy, full of security problems, and lacked standards compliance. IE9 is a fine, feature-rich browser. I mainly use IE to access internal sites based on SharePoint (to take advantage of the tight integration with the operating system) and to access email remotely via Outlook Web Access (using an ActiveX control). I don’t think IE is as good as either Chrome or Firefox (largely because of the lack of a rich extension / add-on marketplace) but the gap is rapidly closing.

IE9

Strengths: ActiveX controls; OS-level integration (e.g., SharePoint).
Weaknesses: Lack of rich extensions, possibly because they are harder to build (big thing); lack of a spell checker (small thing).


Firefox

I am a long-time Firefox user; it has been my main browser for as long as I can remember. Initially I started using Firefox because IE was so problematic. Five years ago Firefox had unique and innovative features like tabs and extensions, and was much more secure than IE. What really hooked me on FF was the Adblock Plus extension. I cannot remember the last time I saw an ad in Firefox, never mind clicked on one. Over the past year I’ve become more conscious of how long it takes Firefox to start up, how long it takes to load web pages, and how much memory it consumes. Part of my speed and memory problem is no doubt related to the extensions I have running.

FF5

Strengths: Rich extensions marketplace.
Weaknesses: Memory management.

Chrome

Over the past year I’ve started to use Chrome more and Firefox less. What first grabs you about Chrome is the out-of-the-box speed. The application loads nearly instantly, pages render quickly, and in general the application feels zippy. Chrome has many of the nice features of Firefox but without the bloat. Most importantly for me, most (but not all) of the extensions I use regularly are now available for Chrome. The extensions I use are Adblock Plus, AutoCopy, Firebug, Instapaper, LastPass, StumbleUpon, UserAgentSwitcher, and Xmarks. One really nice under-the-hood feature unique to Chrome is that you don’t need to restart the browser after installing an extension. The other thing I’ve noticed is that Chrome seems to do a good job of memory management and isolating pages in separate processes. In my unscientific test of IE, Firefox, and Chrome, IE used the least memory. On the other hand, IE doesn’t have the add-ons/extensions that Firefox and Chrome do, so the test is not really fair.

Besides the inertia of switching from one browser to another, what has held me back from hopping on the Chrome bandwagon is the overhead of learning its development tools. I’ve become very attached to Firebug, which is Firefox-specific, and I fairly regularly use the UserAgentSwitcher add-on, which is not available for Chrome. Interestingly, John Barton, the lead Firebug developer, has recently joined the Google Chrome team.

Chrome 12

Strengths: Fast.
Weaknesses: Memory management; lack of a user agent switcher add-on.

Summary

For the moment I am “stuck” using all three browsers. Stuck really isn’t the right word for it. Until IE gets a richer extension model I probably won’t be using it unless I have to. Unless something unexpected happens, I expect that Chrome will become my everyday browser.


CloudForce Boston – June 2011

July 4, 2011

I recently attended the Cloudforce 2011 event in Boston.  Marc Benioff spoke for slightly more than an hour with minimal slideware and no notes.  Marc is an amazing speaker, and I suspect much of what we saw in Boston will get re-used at the Dreamforce keynote in August.  Cloudforce was a great opportunity for me to piece together in my mind the Salesforce.com (SFDC) story.

Salesforce now has a very interesting collection of technologies – four families, if you will:

  • Tools for sales – “Sales Cloud” (pipeline management) and Jigsaw (leads)
  • Tools for supporting customers – “Remedyforce” (generic customer support) and “Service Cloud” (help desk in the cloud)
  • Social – Chatter (Twitter for business) and Radian6 (social network monitoring)
  • Platform as a Service (PaaS) – Force.com (applications for extending SFDC), Database.com (database in the cloud), and Heroku (Ruby on Rails in the cloud)

The tools for sales and support evolved organically from SFDC’s roots as a CRM company.  The PaaS applications, Heroku aside, evolved from the platform that Salesforce built to support the CRM business.  The social pieces are all new and were the focus of much of Marc’s talk.

Social Enterprise

My sense is that it’s the social networking pieces that really excite Benioff personally.  Benioff paints a compelling vision of Salesforce.com (the company) becoming the engine of the social enterprise.  The argument in favor of becoming a social enterprise is based on how dominant social networking (read: Facebook) has become for the consumer.  Indeed, he asserted that more people are now using social networks than email.  He used Facebook as an opportunity to cite McKinsey research on the consumerization of IT.  The key point is that, for the first time, the technology consumers use in their personal lives is driving enterprise IT strategy.  Today’s consumers expect to be able to interact with brands on their iPhones and via social networks; companies ignore this dynamic at their peril.  Indeed, as seen in this video, he is positioning his company as the de facto expert in B-to-B social networks.  Chatter (enterprise Twitter) and Radian6 (social network monitoring) are Salesforce.com’s beachheads into the world of the social enterprise.

Platform as a Service

Using SFDC as a PaaS provider makes a ton of sense if your organization already has (or will have) a significant investment in its CRM products.  It’s logical that Salesforce would make its PaaS offerings available for anyone to use; however, this is a crowded space with strong, established offerings from Amazon, Microsoft, and Google.  I am not sure I fully appreciate the Heroku tie-in.

Disclaimer: I am basically a Microsoft .NET guy but am a very big fan of Ruby on Rails.  I’ve written Force.com applications and have done more than the prototypical hello-world application in a bunch of languages including PHP, Ruby on Rails, and Java.  I’ve never been wild about the tools (Apex Code and Visualforce) that Salesforce gives you as a developer to natively integrate with their platform.  Apex Code essentially allows developers to write stored procedures for the Salesforce.com database, while Visualforce is a presentation layer language for their system.  Both of these languages have roots in Java and are based on the model-view-controller (MVC) design pattern.  My basic problem with the SFDC languages is that they are SFDC-specific, and there is a learning curve with any new language.  (There is amazing developer support for just about any language you can imagine to interface with SFDC, but only applications written in Visualforce or Apex can run on the SFDC servers.)  If you talk to the CRM folks, they will tell you that this specialization was needed to achieve the kind of performance and deep integration they wanted.  At the time these tools were announced I wondered why they couldn’t do what they wanted with Java.  Now SFDC buys Heroku, which allows you to run Ruby on Rails in the cloud.  Ruby on Rails is another MVC-based framework and is the “hot” open source technology.  How amazing would it be if you could write a Force.com application in RoR that would run on the SFDC servers?

Some amusing quips from Cloudforce

  • Beware of the false cloud – a cloud is not real if it is not public
  • SharePoint is like your grandfather’s attic – what I put in I can never find
