Microsoft has a different approach. Azure may be ahead of Google in customer adoption, but many would argue that Google has much larger-scale infrastructure; it just hasn't effectively sold it to customers yet. This might mean that people end up picking the open source library to build on because they know they can eventually move their workloads over to the best platform (Google).
To counter that, Microsoft has to offer a better open source option, so that people pick its product because it has the best features. Those users may or may not decide to run it on Azure, but it reduces the chances that Google's platform becomes the default choice.
CNTK has a big advantage over TensorFlow for people outside of academia: it can take advantage of the power of many servers at the same time. That's important because it's rare that a single computer is powerful enough to handle a real-world artificial intelligence application, such as speech recognition on an app used by millions of people. Internally, Google likely uses TensorFlow on thousands of servers at a time. But the version Google released to the public, Huang says, can't be used in this way.
This is important because Google considers its infrastructure a competitive advantage, which is probably why the restriction exists in the open source version.
The next logical step for Microsoft would be to integrate CNTK into its own Azure Machine Learning service. Indeed, it may already be more efficient to run CNTK on top of Azure's upcoming GPU Lab product.
It's no longer a race just to have the best proprietary cloud services. Open source has become a key strategy for driving cloud adoption, especially for workloads that evolve from personal projects into large-scale commercial systems.
Who will get there first: Microsoft or Google?