Cloud computing is an often misunderstood concept that fell victim to Gartner’s hype cycle in a pretty spectacular way. But it is a real thing and it is a game changer (it already has changed many games) and it is here to stay. But there are plenty of blogs and books that can tell you that. What a lot of them don’t tell you is that it is part of something much bigger and much more important.
One of the common criticisms of the hype around cloud computing is that “this isn’t anything new, this is just going back to people logging into big mainframes like they used to do in the 70s!”.
Now it is true that people did log into big mainframes in the 70s, and that was where all the business logic lived (people in an enterprise worked on dumb terminals that were basically TV monitors). And then in the 1990s everyone got a desktop PC and enterprises went all client-server crazy and put a lot of UI and business logic in software on client machines, and that turned out to be a terrible idea because suddenly you had to configuration-manage all the software deployments on those 20,000 client machines.
And now hey we have SaaS (software as a service) so now client machines just have a browser and go to Salesforce or whatever and the Salesforce servers do all the heavy lifting, so the poor tech support guys now only have to manage the OS and browser for 20,000 client machines, instead of a bulky fat-client CRM program as well. So we’ve gone full circle, right?
There are a number of important ways that we haven’t gone back to the mainframe – terminal model of the 70s. Some are technological, some are business, some are a mixture of both.
The technology of the old mainframe model was simple: one big expensive computer, connected to lots of small dumb cheap computers. The technology model of the new cloud provider model looks the same, and it is supposed to. Cloud providers go to great lengths to make it look and feel like the “big expensive computer”. But it is in fact completely different.
Hiding behind a layer of abstraction is not one vast computer, but thousands (or hundreds of thousands) of small crap computers. The magic of cloud technology is that it is able to make thousands of small crap computers behave as one vast computer. Is there a point to this?
Of course: abstracting and aggregating server resources means you can achieve some important advantages: location independence (your assets can be literally all over the world and you can shift their workload depending on demand, location of customers, and so on), resilience / failover (you can have a hundred servers all break down at once and nobody will even notice), and economies of scale (one hundred crap computers cost less than one supercomputer).
In the old mainframe model, the business owned the mainframes. Or they leased them for a long time for vast sums of money from IBM.
Now, the business has no knowledge of, or concern for, the assets upon which their services are based. The cloud provider owns and looks after all the servers. The enterprise just buys and pays for the client machines (which are really just running browsers).
This means the business doesn’t have to worry about buying, leasing, managing, fixing, and upgrading expensive servers; the cloud provider does. You could argue that the cloud providers will amortise those costs out and pass them on to businesses in their pricing, but this is not much of a point. By using multi-tenancy, virtualisation, and the economies of scale of commodity hardware, the costs (once amortised out across all customers over time) are orders of magnitude lower per customer than they would be with dedicated on-premise machines.
This one is massive. In the traditional model, a business would pay up-front and buy a big expensive computer. That would then be an asset on their books, which they would depreciate over a number of years until it reached end-of-life, at which point they forked out another pile of money and did it again.
The big expensive enterprise computer would come with a license for big expensive enterprise software that ran on the computer. The only alternative was to lease it from a vendor, which is pretty much the same thing (I’ll explain shortly). There are big problems with this model.
The business has to make a large up-front payout before they’ve even got their hands on the computer or the software and decided whether it’s right for them; it’s extremely difficult and expensive to change to another model or vendor; the asset has to be regularly replaced at large expense; and the asset and software licence are paid for in full no matter how much the asset is actually used.
If the economy tanks and the business reduces its operations by 50%, it doesn’t get 50% of its money back. The strongest advantage of the cloud model is that it is a rental, pay-as-you-go model, i.e. utility pricing. Hang on, isn’t that leasing, which people often did anyway?
This is a common misconception. Leasing is not the same as renting. In fact, it is much more similar to buying than it is to renting. Think about a company looking to get a company car.
They could buy the car for $50,000 (swapping one asset, cash, for another, a car), depreciate it over five years, and then do it again. If they were going to buy it they wouldn’t “save up the money”; they would probably borrow (finance) it. So let’s say with financing costs they would pay $12,000 a year for five years to own a car.
Or they could lease the car. They could pay a company $12,000 a year for five years to have the car. Then the car is worn out and they sign a new lease and do it again. So buy and lease are basically the same.
But there’s a third option: the company could rent the car. Each time it needed a car, it could go to a car rental company and rent one for $80 a day. “But that’s crazy!” you’re thinking. “There are 260 business days in a year, which at $80 a day comes to over $20,000!”
Which would be crazy, if you used the car every day. What if you only use it every second day? And not at all for a few weeks over Christmas? It’s now about $10,000 a year. And you don’t have to pay for repairs either; the rental company does. You get a freshly serviced car every time. And if your business goes bad for a while and you hardly need the car at all, you hardly pay anything.
Cloud is like this: you pay for what you use. The price per unit (per user, per day, per hour, whatever) of a cloud offering can be higher than the price per unit of a full purchased offering, even much higher, but it can still be cheaper overall because you only pay for what you use. In the car example, the rental company could charge $1,000 a day to rent the car and it could still be cheaper than buy or lease if you only used the car one week per year. Strangely, the single greatest advantage of cloud computing is not technological: it’s an accounting advantage.
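The arithmetic in the car example is worth making explicit. Here is a quick sketch of it in Python; all the figures are the hypothetical ones from the example above, not real rental or lease prices:

```python
# Illustrative buy/lease vs rent comparison, using the hypothetical
# figures from the car example above.

ANNUAL_COST_BUY_OR_LEASE = 12_000   # $ per year, financed purchase or lease
DAILY_RENTAL_RATE = 80              # $ per rental day
BUSINESS_DAYS_PER_YEAR = 260

def rental_cost(days_used: int, daily_rate: int = DAILY_RENTAL_RATE) -> int:
    """Total annual cost if the car is rented only on the days it is used."""
    return days_used * daily_rate

# Used every business day: renting is the expensive option.
print(rental_cost(BUSINESS_DAYS_PER_YEAR))  # 20800 -- more than $12,000

# Used every second day: renting is now much cheaper.
print(rental_cost(130))                     # 10400

# Even at a steep $1,000/day, one week's use per year beats buy/lease.
print(rental_cost(5, daily_rate=1_000))     # 5000

# Break-even: the most rental days per year before buy/lease wins.
print(ANNUAL_COST_BUY_OR_LEASE // DAILY_RENTAL_RATE)  # 150 days
```

The point the numbers make is the one in the text: a higher price per unit can still mean a lower total cost, because with renting the total scales with actual usage rather than with ownership.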
Buying expensive assets and watching them depreciate and break is not something any smart business likes to do. They will almost always be looking to expense things wherever possible, i.e. turn a capital expenditure (capex) into an operating expenditure (opex), even given the tax advantages of depreciation.
It’s common sense: assets are risky, require large up-front investment, devalue quickly, break often, and are usually illiquid. It’s no wonder businesses are attracted to cloud offerings for these reasons. But it’s not just enterprises and it’s not just SaaS and it’s not even just cloud.
We are going through a massive shift in the fundamental structure of our economies: from buy to rent. Some are calling this everything-as-a-service. Why buy a car at all when you can book a car with a company like Go-Get and pay for what you need? Why buy albums or DVDs which you might only use a few times when you can stream content via Spotify or Netflix? Why purchase a license for expensive drawing software when there are cheap or free versions that run in a browser?
We will see more and more things get turned into services over time, and this can only be a good thing.