The promise of cloud computing is that you, the customer, never have to buy another server, back up another disk drive or worry about another software upgrade. All those promises are true, and now there are multimillion-dollar companies without a single server closet. Cool.
Unfortunately, cloud applications and services are too often bought by people who really shouldn't be buying. Sure, they may have the budget (did you hear Gartner's prediction that the CMO will spend more on tech than the CIO by 2017?), but that doesn't mean they have the training to make good IT decisions, let alone the discipline or skills in their underlings to actually execute a coordinated technology strategy.
I'm not criticizing user-driven IT. The users should be driving. But to make IT really work and scale, projects have to be done with skill sets and a focus that are rare in user departments. My entire consulting firm is based on fixing cloud systems that have been put in quickly and managed haphazardly, if managed at all. (Note that I'm not trashing any one vendor here. This article applies to almost any serious cloud service.) This is a clear case of "Pay me now, or pay me (much more) later."
With too many customers, system ownership is effectively set to null almost immediately after a cloud service or application is turned on. After all, there's no hardware or software to manage. The service just runs itself. Why dedicate any staff?
That might look like a reasonable economy, but it leads to huge costs soon enough. Why? The system isn't the critical asset. The data is. The data in almost any cloud system is worth far more than whatever you pay on a monthly or even yearly basis. If your data gets gunked up or made meaningless, that could cost you a lot of profit, or even get you into some real trouble.
Let's look at three lessons too many companies have learned the hard way by making themselves vulnerable in the cloud.
Lesson #1: The Cloud Never Forgets
Since your data is valuable, you want it to be backed up and replicated for disaster recovery and failover. Any commercial-grade cloud service will be doing this for you. Fine.
Now, think about how the services do it. They have hundreds of thousands of users banging on the same server clusters. Their replication services that run on everybody's data have to constantly stream updates from the local disk array over to the remote disaster recovery site(s). As this happens continuously, their replication has to be optimized for high performance, low latency and quick failover time. But it's nearly impossible to simultaneously optimize that service for ease of file purging.
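To see why replication and purging pull in opposite directions, here's a toy model in Python. It is not any vendor's actual architecture, just a sketch under the assumptions above: every write streams immediately to the replicas, and a periodic backup keeps a point-in-time copy. A later "delete" clears the live copies, but the earlier backup still holds the data.

```python
# Toy model of why purging is hard in a replicated cloud service:
# writes stream to every replica for fast failover, and point-in-time
# backups retain old versions even after a "delete".

import copy

class ReplicatedStore:
    def __init__(self, replica_count=2):
        self.primary = {}
        self.replicas = [{} for _ in range(replica_count)]
        self.snapshots = []  # point-in-time backups at the DR site

    def put(self, key, value):
        # Optimized path: stream the update to every replica immediately.
        self.primary[key] = value
        for replica in self.replicas:
            replica[key] = value

    def snapshot(self):
        # Periodic backup: a full copy of the primary, kept for recovery.
        self.snapshots.append(copy.deepcopy(self.primary))

    def delete(self, key):
        # "Delete" removes the key from the live copies only; snapshots
        # taken earlier still hold the value.
        self.primary.pop(key, None)
        for replica in self.replicas:
            replica.pop(key, None)

    def exists_anywhere(self, key):
        stores = [self.primary, *self.replicas, *self.snapshots]
        return any(key in s for s in stores)

store = ReplicatedStore()
store.put("customer-42", "sensitive record")
store.snapshot()                 # nightly backup runs
store.delete("customer-42")      # user thinks the record is gone
print(store.exists_anywhere("customer-42"))  # True: the backup kept it
```

Truly purging that record would mean walking every snapshot as well, which is exactly the operation the replication pipeline was never optimized for.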