TensorFlow, Google's machine learning framework, was open sourced at the end of 2015 as a standardised way to build deep learning models. Following the same playbook as Kubernetes, the goal is to standardise machine learning on a single framework and API. You can, of course, run your own TensorFlow models, but Google now offers a cloud service where you can run them instead.
Not only that, but the TPU announcement demonstrates that Google has built dedicated hardware designed specifically to "deliver an order of magnitude better-optimized performance per watt for machine learning." This is a true differentiator: no other provider offers such hardware on its cloud platform, and with the increasing focus on AI and machine learning, positioning Google Cloud as the best place to run machine learning workloads matters.
Amazon and Azure are still very much in the lead when it comes to cloud services overall, but it is becoming clearer how Google is differentiating its offering. This is a long game, and you could argue that the war over compute and storage is no longer interesting. What is interesting is Google's work on becoming a platform for machine learning and AI, whether on the consumer side with the other I/O announcements such as Allo and Assistant, or on the developer side with TensorFlow and TPUs.