Although existing vSphere golden images or VMs can still be leveraged, the process of cloudifying vSphere offers new opportunities to phase in more modern configuration management techniques. Automated configuration management tools can help teams evolve from golden-image production to an infrastructure-as-code model, in which machine configuration is declared in version-controlled code rather than baked into images. This is a key enabler for the shift to the growing DevOps movement, which streamlines development and reduces errors through automation and collaboration.
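The core idea behind infrastructure-as-code can be sketched in a few lines. This is a toy illustration, not any specific tool's API: desired state is declared as data, and an idempotent routine computes only the actions needed to converge a machine to it, so the same definition can rebuild a VM on vSphere or in the cloud.

```python
# Hypothetical desired state for a VM, declared as data (an assumption for
# illustration; real tools use their own declarative formats).
desired_state = {
    "packages": {"nginx", "ntp"},
    "services": {"nginx": "running"},
}

def apply(current, desired):
    """Return the actions needed to converge current state to desired state.

    Idempotent: applying the result, then calling apply() again, yields [].
    """
    actions = []
    # Install only packages that are missing.
    for pkg in sorted(desired["packages"] - current.get("packages", set())):
        actions.append(f"install {pkg}")
    # Correct only services that are not already in the desired state.
    for svc, state in desired["services"].items():
        if current.get("services", {}).get(svc) != state:
            actions.append(f"ensure {svc} is {state}")
    return actions

# A freshly provisioned, empty VM needs every action applied.
actions = apply({"packages": set(), "services": {}}, desired_state)
```

Because the routine compares current to desired state rather than replaying a script, re-running it against an already-converged machine produces no actions, which is what makes the definition safe to keep in version control and apply repeatedly.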
* Single pane of glass for management, visibility and governance. When virtualization first took over server infrastructure, the savings were astronomical. However, the ease of setting up virtual machines quickly led to VM sprawl: VMs were provisioned for users with little ability to track whether those VMs were fully utilized, under-utilized, or dormant. In addition, application owners often request more compute resources than they actually use, because it is difficult to predict the appropriate level in advance. As a result, companies make sub-optimal use of resources, with excess capacity often running at 30 percent.
As organizations leverage vSphere environments, AWS, and other clouds to meet infrastructure needs, they must develop an approach to managing and governing across all of the available resource pools.
Companies will continue to leverage familiar vSphere and vCenter tools to help them deploy and manage their virtualized environments. However, a cloud portfolio management solution that works in parallel with existing vSphere management tools can be used to deliver these multi-cloud capabilities. Enterprises can use cloud portfolio management to gain complete visibility and centralized governance across all of their resource pools, while continuing to use existing vSphere tools for management of their underlying virtualized environments.
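The visibility such a governance layer provides can be reduced to a simple classification over utilization data. The sketch below uses made-up VM names, metrics, and thresholds (all assumptions) to show how a fleet report surfaces the dormant and under-utilized VMs that sprawl produces.

```python
# Assumed thresholds for illustration; a real governance tool would make
# these configurable and use richer signals than average CPU alone.
def classify(avg_cpu_pct):
    """Bucket a VM by its average CPU utilization percentage."""
    if avg_cpu_pct < 1:
        return "dormant"
    if avg_cpu_pct < 30:
        return "under-utilized"
    return "utilized"

# Hypothetical fleet: VM name -> average CPU % over the last 30 days.
fleet = {"vm-web-01": 55.0, "vm-batch-02": 12.5, "vm-old-03": 0.2}

report = {name: classify(pct) for name, pct in fleet.items()}
```

Rolled up across every resource pool, on-premises and cloud alike, a report like this is what turns "single pane of glass" from a slogan into a reclaim-or-resize worklist.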
* Cost and capacity management. As organizations combine both public cloud and on-premises infrastructure, they need to develop new processes and approaches to managing costs and capacity.
Prior to public cloud, organizations purchased on-premises infrastructure as discrete capital expenditures that were managed through detailed business cases and lengthy approval processes. While this process provided oversight on costs, it required companies to accurately forecast capacity requirements and then either over-provision to meet peak capacity or limit application usage to smooth out variability in demand.
In addition, if demand was lower than expected, there was no opportunity to reduce these sunk costs, and if demand was higher, new purchases triggered another round of approval processes. Even organizations that chose to outsource data centers were typically locked into expensive, long-term contracts that often did not provide the flexibility to adapt to changing business environments.
With enterprises now using a combination of public cloud and on-premises infrastructure, they will need to master both traditional discrete capacity management approaches and continuous cost management approaches in order to minimize overall spend.
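The gap between the two cost models can be made concrete with a worked example using hypothetical numbers (all figures below are assumptions for illustration): fixed capacity must be sized and paid for at peak around the clock, while pay-as-you-go billing tracks actual hourly consumption.

```python
# Hypothetical workload: 24 hourly demand figures for one day, in capacity
# units -- quiet for 16 hours, then an 8-hour peak.
HOURLY_DEMAND = [20] * 16 + [100] * 8
PEAK_UNITS = max(HOURLY_DEMAND)      # on-prem capacity must cover the peak
COST_PER_UNIT_HOUR = 0.10            # assumed unit price, same in both models

# Fixed (on-premises) model: peak-sized capacity is paid for every hour,
# regardless of how much of it is used.
fixed_cost = PEAK_UNITS * len(HOURLY_DEMAND) * COST_PER_UNIT_HOUR

# Elastic (pay-as-you-go) model: pay only for units actually consumed.
elastic_cost = sum(HOURLY_DEMAND) * COST_PER_UNIT_HOUR

# fixed_cost comes to 240.0 versus 112.0 for elastic in this example.
```

The arithmetic also shows why neither discipline can be dropped: the fixed model rewards accurate up-front forecasting, while the elastic model rewards continuous monitoring, since every idle hour now shows up directly on the bill.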