Google, Amazon reveal their secrets of scalability

Joab Jackson | Nov. 28, 2013
Internet giants such as Google and Amazon run IT operations far larger than most enterprises even dream of, but the lessons they learn from managing those humongous systems can benefit the rest of the industry.

"The point is not to achieve 100 percent availability. The point is to achieve the target availability -- 99.999 percent -- while moving as fast as you can. If you massively exceed that threshold you are wasting money," Underwood said.

"Opportunity costs is our biggest competitor," he said.

The following week at the Amazon Web Services (AWS) re:Invent conference in Las Vegas, James Hamilton, AWS' vice president and distinguished engineer, discussed the tricks Amazon uses to scale.

Though Amazon is selective about the numbers it shares, AWS is growing at a prodigious rate. Each day, it adds the equivalent of the compute resources (servers, routers and data center gear) that it had in total in the year 2000, Hamilton said. "This is a different type of scale," he said.

Key for AWS, which launched in 2006, was good architectural design. Hamilton admitted that Amazon was fortunate to get the architecture for AWS largely correct from the beginning.

"When you see fast growth, you learn about architecture. If there are architectural errors or mistakes made in the application, and the customers decide to use them in a big way, there are lots of outages and lots of pain," Hamilton said.

 

The cost of deploying a service on AWS comes down to setting up and running the infrastructure, Hamilton explained. For most organizations, IT infrastructure is an expense, not the core of their business. But at AWS, engineers focus solely on driving down infrastructure costs.

Like Google, Amazon often builds its own equipment, such as servers. That's not practical for enterprises, he acknowledged, but it works for an operation as large as AWS.

"If you have tens of thousands of servers doing exactly the same thing, you'd be stealing from your customers not to optimize the hardware," Hamilton said. He also noted that servers sold through the regular IT hardware channel often cost about 30 percent more than buying individual components from manufacturers.

Not only does this allow AWS to cut costs for customers, but it also allows the company to talk with the component manufacturers directly about improvements that would benefit AWS.

"It makes sense economically to operate this way, and it makes sense from a pace-of-innovation perspective as well," Hamilton said.

Beyond cloud computing, another field of IT that deals with scalability is supercomputing, in which a single machine may have thousands of nodes, each with dozens of processors. On the last day of the SC13 supercomputer conference, a panel of operators and vendors assembled to discuss scalability issues.

William Kramer, who oversees the National Center for Supercomputing Applications' Blue Waters machine at the University of Illinois at Urbana-Champaign, noted that supercomputing is experiencing tremendous growth, driving the need for new workload scheduling tools to ensure organizations get the most from their investment.
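The core job of such a scheduler is to pack queued jobs onto a fixed pool of nodes so the machine stays busy. The Python sketch below is a hypothetical first-fit pass with invented job names and node counts; production schedulers on systems like Blue Waters are far more sophisticated:

# A hypothetical first-fit scheduling pass: start every queued job that
# still fits in the free node pool. Job names and sizes are invented.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes_needed: int

def first_fit(queue, free_nodes):
    """Greedily start queued jobs, in order, while they fit in free nodes."""
    started = []
    for job in queue:
        if job.nodes_needed <= free_nodes:
            free_nodes -= job.nodes_needed
            started.append(job)
    return started

queue = [Job("climate", 12_000), Job("genomics", 3_000), Job("qcd", 20_000)]
running = first_fit(queue, free_nodes=22_000)
print([job.name for job in running])  # ['climate', 'genomics'], using 15,000 of 22,000 nodes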

 
