With the benefit of 20/20 hindsight, I have to ask myself how I, and most other CIOs, deluded ourselves for so long into thinking that hardware was cheap. In the old days, we let different functional groups in the company fall in love with individual business applications, each with its own unique hardware requirements and middleware architecture. Because hardware was always the smallest fraction of any major system acquisition, we continually oversized it to guard against potential performance problems. To be perfectly blunt, we didn't really perform capacity management because we didn't want to know the answer: we all intuitively knew that our server and storage farms were significantly under-utilised.
When public cloud providers started to provision IT infrastructure on demand on a "pay-as-you-go" basis, we all awoke from our reveries and realised that these new on-demand options would soon displace our in-house operations with their efficiency and reliability. So we started to act like public cloud providers within our own companies. We laid down the law with our business clients and told them which hardware and middleware architectures we would support and which ones we would not. We told them: "If it doesn't run on one of our standard architectures, you can't buy it."
Then we had our day of reckoning with our CFOs. We had to go in and admit that all the hardware they had allowed us to purchase over the past decade was sitting in the data centre running below 50 per cent of capacity on any given day (except the mainframe, of course). We told them that if they would let us buy capacity in advance of demand, instead of making incremental purchases tied to each major system acquisition, we could achieve much higher rates of return on our IT hardware. In exchange for that new purchasing policy, we simply had to agree to start reporting on capacity utilisation.
Security concerns in public clouds
It's amazing how long we fussed and fretted about security concerns in the public clouds back then.
Now, we routinely burst out to public cloud providers to handle specific types of workloads. We characterise the security requirements of each computing workload and use the appropriate encryption or data aliasing techniques to ensure the security of data being passed to the external providers. Although some forms of data are too sensitive to ever leave our company data centre, those workload allocation decisions are now handled through our automated provisioning processes. We don't have philosophical debates about what can and can't be passed outside our firewall. We solved that problem a long time ago.
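The placement logic described above, where each workload is classified by the sensitivity of its data and then routed automatically, can be sketched roughly as follows. This is a minimal illustration, not any company's actual provisioning system; the tier names, the `Workload` type, and the `place_workload` function are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class Sensitivity(Enum):
    """Hypothetical data-sensitivity tiers for a workload."""
    PUBLIC = 1        # no special handling required
    CONFIDENTIAL = 2  # may leave the firewall only if encrypted or aliased
    RESTRICTED = 3    # must never leave the company data centre


@dataclass
class Workload:
    name: str
    sensitivity: Sensitivity


def place_workload(w: Workload) -> str:
    """Return a placement decision based on the workload's sensitivity.

    Restricted data stays on-premises; confidential data may burst to a
    public cloud provider only after encryption or data aliasing; public
    data can burst out as-is.
    """
    if w.sensitivity is Sensitivity.RESTRICTED:
        return "on-premises"
    if w.sensitivity is Sensitivity.CONFIDENTIAL:
        return "public-cloud (encrypt or alias data first)"
    return "public-cloud"
```

In a real provisioning pipeline a rule like this would run automatically at deployment time, which is what removes the need for case-by-case philosophical debates about what may pass outside the firewall.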