BI at SaaS Speed

Winston Churchill once said that “difficulties mastered are opportunities won”. His quote is very applicable to the effort of building BI in the cloud. GoodData announced earlier today that May 2011 was our biggest month ever, so it is a good time to look at the difficulties and opportunities of cloud BI in greater detail.

Business Intelligence is a huge opportunity. Even in its current, dysfunctional, on-premise form it is a $10B software industry. And on-premise BI is an extensive and expensive IT initiative that involves building a complete chain of data integration, data warehousing, dashboarding and visualization. On top of the IT effort comes the tricky business part: what to measure, what the right metrics are, how to present them and to whom. And it all has to happen at the speed of business, not at the speed of IT.

This IT/business dichotomy leads to an extremely low success rate for BI projects – as much as $7 billion annually is spent on BI undertakings that are considered failures. That’s right – $7 billion worth of BI software ends up sitting on the shelf every year!

By contrast, the SaaS model works best when the product is well defined, customer adoption is fast, satisfaction/loyalty is high and the cost of servicing the customer is low (for more information on SaaS metrics please read “Top 10 Laws of Cloud Computing and SaaS” here). This means that traditional, slow-moving, complex and expensive BI will NEVER make it to the cloud. Numerous small and large companies have tried to host their traditional on-premise BI products in the cloud, but SaaS laws are called laws for a reason – these companies have either failed already or eventually will.

So what is GoodData doing differently to master the difficulties of Cloud BI?

1. Product Definition/Customer Adoption – in order to make customer adoption as quick as possible, we are building a set of BI applications. These apps are templates that contain not only connectors to standard data sources (such as Salesforce, Zendesk and Facebook) but also complete dashboards and reports that incorporate best practices in the form of metrics. Our Sales Analytics app helps you measure predicted revenue. Our Helpdesk Analytics app measures your backlog and resolution times. Our Marketing Analytics app teaches you how to calculate campaign ROI (there is a sketch of that metric right after this list). We’re adding these applications on a weekly basis. You can see the full list of our apps here: http://www.gooddata.com/apps

2. Customer Loyalty – We deliver a complete, managed service to our customers. Our developers, ops and support personnel make sure that every single data load goes as planned, all reports render correctly and there are no performance issues. We even publish our Operational & Service Performance here: http://www.gooddata.com/trust

3. Cost of Service – We’ve architected a very different platform that allows us to host a large number of customers at a relatively low cost. The platform is so different that we often have a hard time communicating it to the BI analyst community (concepts like REST APIs and stateless services are not part of normal BI nomenclature; the second sketch below shows what I mean). And the flexibility built into the platform allows us to move at the pace of business, not the pace of IT: we deliver a new version of GoodData to our customers every two weeks and we make tons of changes to customer projects daily.
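
For readers who want to see what “best practices in the form of metrics” looks like at its simplest, here is a minimal sketch of the campaign-ROI calculation behind the Marketing Analytics example above (the function and the figures are illustrative, not our actual implementation):

    def campaign_roi(attributed_revenue: float, campaign_cost: float) -> float:
        """Campaign ROI: net return per dollar of campaign spend."""
        return (attributed_revenue - campaign_cost) / campaign_cost

    # Illustrative figures only: a $10,000 campaign that drives $35,000
    # in attributed revenue yields an ROI of 250%.
    print(f"{campaign_roi(35000, 10000):.0%}")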
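
And for the BI analysts puzzled by the platform vocabulary, here is roughly what a stateless REST service means in practice (a hedged sketch in Python – the endpoint and token are illustrative, not our documented API):

    import requests  # third-party HTTP client

    # Every request carries everything the server needs (credentials in a
    # header, project and report in the URL), so any one of N identical,
    # stateless servers can answer it. That is what lets a single platform
    # serve many customers at low cost.
    response = requests.get(
        "https://api.example.com/projects/123/reports/456",  # illustrative URL
        headers={"Authorization": "Bearer <token>"},          # no server-side session
    )
    print(response.status_code)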

Even the fact that we know how many reports we served to our customers in May of 2011 (over 1,000,000) sets us apart. While the old BI industry can only guess at the level of adoption and product usage (of shelfware), we actually know. But again, “difficulties mastered are opportunities won”!

Mr. Jassy, Tear Down This Wall!

Andrew Jassy
SVP, Amazon Web Services

Hi Andy,

I am not going to ask you how you are doing. For everyone in the Amazon Web Services ecosystem, the last 24 hours have been brutal. But I’d like to share my perspective with you, and offer a couple of suggestions:

I believe that in the long run this will be a positive day for the cloud computing movement. Naysayers seeking evidence to avoid the cloud have new ammunition, those hyping the cloud are experiencing its limitations, and the leading cloud provider, your company, is learning from this major outage the importance of humility and cooperation.

I also believe that the way AWS behaves needs to change. You built the leading infrastructure-as-a-service provider with a level of secrecy typical of a stealth startup or a dominant enterprise software platform vendor. It works for Apple – they deliver a complete, integrated value chain. But that is not your position in the cloud ecosystem. Today’s outage shows that secrecy doesn’t and won’t work for an IaaS provider. Compete on scale and enterprise readiness – and part of readiness is being open about your internal architectures, technologies and processes.

Our dev-ops people can’t read tea leaves to figure out how to organize our systems for performance, scalability and, most importantly, disaster recovery. The difference between “reasonable” SLAs and “five nines” is the difference between improvisation and the complete alignment of our respective operational processes. My ops people were ready at 1:00 am PT to start our own disaster recovery, but your status updates completely failed to indicate the severity of the situation. We relied on AWS to fix the problem. Had we had more information, we would have made a different choice.

This brings me to my last point: communication. Your customers need a fundamentally different level of information about your platform. There are some very popular web sites that try to reverse-engineer the way AWS operates. These secondary sources – based on reverse engineering and conjecture – provide a higher level of communication than we get directly from the AWS pages. We live in the days of Twitter, Facebook, Wikipedia and Wikileaks! There should not be communication walls between the IaaS, PaaS, SaaS and customer layers of the cloud infrastructure.

Tear that wall of secrecy down, Mr. Jassy. Tear it down!

Respectfully,

Roman Stanek
CEO and Founder, GoodData (2009 AWS Startup Challenge winner)
@romanstanek
roman@gooddata.com

P.S. I am publishing this letter on my blog. It’s part of open communication between our companies.

With friends like Forrester and Gartner, IBM and SAP don’t need enemies…

The Innovator’s Dilemma by Clayton M. Christensen is my favorite business book – its main idea (disruptive technologies serve new customer groups and “low-end” markets first) was the guiding principle of all my startups. The best part is that even though everybody can read about the power of disruptive technologies, there is no defense against them. Vendors can’t help themselves. They study The Innovator’s Dilemma and pay Christensen to speak to their managers, but their existing customer base and “brand promise” prevent them from releasing products that are limited, incomplete or outright “crappy.” That’s exactly what makes products disruptive. And industry analysts seem to be the only high-tech constituency that has either never read Christensen or is still in absolute denial about him. It makes sense: a book claiming that “technology supply may not equal market demand” is heresy for people who spend their lives focused primarily on the technology supply side.

Christensen argues that vendors no longer develop features to satisfy their users, but just to maintain their price points and maintenance charges (can you name a new Excel feature?). But in many cases vendor decisions are driven even more by industry analysts and their ever-longer feature-list questionnaires. The criteria for inclusion in the Gartner Magic Quadrants and Forrester Waves seem to be copied straight from Christensen’s chapter “Performance Oversupply and the Evolution of Product Competition”. Analysts are the best supporters startups can have: they are paid by the incumbents to keep those incumbents on a path of “performance oversupply”, making them ever more vulnerable to the young vendors “not approved” by the same analysts!

Forrester BI analyst Boris Evelson gives us a great example of this point in his blog post “Bottom Up And Top Down Approaches To Estimating Costs For A Single BI Report”. While Boris is a super smart BI analyst, he somehow failed to observe that his price point of $2,000 to $20,000 per report opens a huge space for economic disruption of the BI market. Anybody interested in the power of disruptive technology in BI should listen to a recent GoodData webinar with Tina Babbi (VP of Sales and Services Operations at TriNet). Tina described how the economics of cloud BI enabled her to shift TriNet’s sales organization “from anecdotal to analytical”. This would not be possible in the luxury-good version of BI, where each report costs thousands of dollars. Fortunately, Tina is paying less per year for a “sales pipeline analytics” service delivered by GoodData than the established vendors would charge for a single report.

I hope Boris’ blog post will appear in a future edition of The Innovator’s Dilemma as a textbook example of how leading analysts failed to recognize that established products are being pushed aside by newer and cheaper products that, over time, get better and become a serious threat. And with friends like Forrester and Gartner, the incumbents don’t really need young and nimble enemies…

Will Moore’s Law find its way to the cloud?

Moore’s Law, loosely stated, says that a computer system’s performance/price ratio will double every two years. That was very much my expectation when GoodData started using Amazon Web Services almost two years ago. But I had to wait until today to see Moore’s Law at work: Amazon announced a 15% drop in EC2 prices. The price of the small Linux instance had been constant at $0.10 per hour for the last two years – now it will be $0.085.
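
Here is the back-of-the-envelope arithmetic behind my disappointment (a sketch, assuming that doubling performance/price should translate into a halving of the hourly price for the same capacity):

    # EC2 small Linux instance pricing vs. a naive Moore's Law prediction
    old_price = 0.10   # USD/hour, constant for the last two years
    new_price = 0.085  # USD/hour after the announced cut

    actual_cut = 1 - new_price / old_price       # 15% actual reduction
    moore_cut = 1 - (old_price / 2) / old_price  # 50%: the price should have halved

    print(f"Actual price cut:           {actual_cut:.0%}")
    print(f"Moore's Law cut over 2 yrs: {moore_cut:.0%}")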

15% in two years – not exactly the exponential improvement in the performance/price curve that I expected. I started to wonder why. Here are my two explanations – I believe the second one is more likely:

  1. AWS prices were set way too low to attract developers two years ago. Moore’s Law helped the price catch up with the real cost of running the cloud.
  2. AWS is a monopoly and Moore’s Law does not apply.

What? Cloud and monopoly? Isn’t utility computing a perfect example of a fiercely competitive commodity market where the price curve is shaped only by supply and demand? What would Nick Carr say? Unfortunately not. As much as we read about different cloud providers, AWS is the only real provider of “infrastructure as a service” in town. If you don’t want to be locked in to proprietary Python or .NET libraries, there is not that much choice.

Until we see the performance/price of AWS double every two years, we should keep wondering about monopolistic pricing.

Please Don’t Let the Cloud Ruin SaaS

Back in the good old days of enterprise software, we did not need to worry about our customers. We delivered bits on DVDs – it was up to the customers to struggle with installation, integration, management, customization and other aspects of software operations. We collected all the cash upfront and took another 25% in annual maintenance. Throwing software over the wall … that’s how we did it. Sometimes almost literally…

I now live in the SaaS world. My customers only pay us if we deliver a service level consistent with our SLAs. We are responsible for deployment, security, upgrades and so on. We operate the software for our customers and we deliver it as a service.

But there now seems to be a new way to “throw software over the wall” again. Many software companies have repackaged their software as Amazon Machine Images (AMIs) and relabeled it as SaaS or cloud computing. It’s so simple, it’s so clever: Dear customer, here is the image of our database, server, analytical engine, ETL tool, integration bus, dashboard, etc. All you need to do is go to AWS, get an account and start those AMIs. Scaling, integration and upgrades are your worry again. Welcome back to the world of enterprise software…
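
To make the point concrete, here is a sketch of the entire “service” such a vendor delivers – one API call to start the image (shown with the modern boto3 client and a hypothetical AMI id, purely for illustration); everything after that call is the customer’s problem:

    import boto3  # AWS SDK for Python, used here purely for illustration

    # The vendor's involvement ends with this one call; scaling, patching,
    # backups and upgrades belong to the customer again.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    ec2.run_instances(
        ImageId="ami-00000000",  # hypothetical vendor-supplied image
        InstanceType="m1.small",
        MinCount=1,
        MaxCount=1,
    )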

AMI is the new DVD and this approach to cloud computing is the worst thing that could happen to SaaS. And SaaS in my vocabulary is still Software as a Service…

Small Pieces Tightly Joined: Open Source in the Cloud

It’s not a shock to state that cloud computing will disrupt the business model of commercial software. But how will it affect the open source movement?

The rise of open source is clearly linked to the rise of the web. Buy a commodity piece of hardware, download the source code of any of the thousands of open source projects and start to “scratch your own itch”. My Linux box will communicate with your Linux box as long as we stick to some minimal set of protocols. The web is loosely coupled, and software for it can be developed independently, in a bazaar style.

It’s not quite as straightforward in the cloud. Clouds are also composed of thousands of commodity PCs, but the cloud operator manages the overall architecture and deployment – power supply, cooling, hypervisors, security, networks and so on. We don’t rely on a minimal set of protocols in the cloud; on the contrary, the cloud is defined by fairly complex, high-level APIs. Even though the actual cloud OS may come from the open source domain, the tightly coupled nature of the cloud prevents users from modifying the cloud software.

There’s a lot of talk today about setting up private clouds with an open source cloud OS, but the idea of private clouds is simply a delusion. Since the owner of a private cloud has to purchase all the required hardware upfront, private clouds don’t provide the main benefit of cloud computing: elasticity. Other people claim that clouds are not compatible with the open source movement, or call the idea outright ‘stupidity’.

I see two possible solutions to this problem:

Benevolent dictator: Leading cloud providers (Amazon, Google, MSFT) open-source their complete stack. This means they would let the community inspect the code, fix bugs, suggest improvements and define a clear roadmap, similar to the Linux roadmap. This would also require a benevolent dictator to manage the evolution of the cloud. Given the level of investment required to build and operate a cloud, I don’t believe this is a likely scenario.
The new PC: The open source community accepts the cloud as the new HW/OS platform. Instead of building apps on top of x86 platforms (Wintel, Mac…), open source applications would be built on top of the Amazon Web Services or Google App Engine APIs. And these apps would handle the portability of data so that data doesn’t get locked into the cloud.

At the end of the day, cloud computing equals utility, and utility creates stability. And a stable set of APIs, protocols and standards is a great place for open source to flourish. The best open source projects grew on top of stable standards: MySQL/SQL, Linux/x86, Firefox/HTTP/HTML. I wonder what the most important open source software to grow on top of the cloud will be…

BI meets BIS

I am in Miami today, speaking at the Innovation World conference organized by Software AG. The title of my speech is “Business Intelligence meets Business Infrastructure Software” and here are my slides.

Clouds over NYC

I would never have expected New York to be a cloud computing hotspot, but two hot startups in the cloud space are actually based in NYC: 10gen and AppNexus.