Google Cloud TPU Announcement
Today during the Google I/O keynote, Sundar Pichai, CEO of Google, made a significant Google Cloud TPU announcement. Pichai announced the second generation of the Google TPU, which is optimized for both training and inference. He also revealed new programs to help the software development community use these advances on Google Cloud.
[Embedded tweet from Mike Demler (@MikeDemler), May 17, 2017]
Google Cloud TPU Pods
These new TPUs can be stacked into pods of 64 boards. Each Google Cloud TPU board has four chips and delivers 180 teraflops; grouped into a set of 64 boards, a Google Cloud TPU Pod delivers 11.5 petaflops.
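The pod figure follows directly from the per-board number. A quick back-of-the-envelope check (the constants come from the announcement; the variable names are ours):

```python
# Quoted per-board performance and pod size from the announcement.
TFLOPS_PER_BOARD = 180   # one Cloud TPU board (4 chips)
BOARDS_PER_POD = 64      # boards stacked into one TPU Pod

# Convert teraflops to petaflops (1 PFLOPS = 1,000 TFLOPS).
pod_petaflops = TFLOPS_PER_BOARD * BOARDS_PER_POD / 1000

print(pod_petaflops)  # 11.52, reported as "11.5 petaflops"
```

The exact product is 11.52 petaflops, which Google rounds to the 11.5 petaflops cited in the announcement.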
According to Pichai, this is “an important advance in technical infrastructure for the AI era. The reason we named it Cloud TPU is because we are bringing it through to the Google Cloud Platform. Google Cloud TPUs are coming to Google Cloud Engine as of today. We want Google Cloud to be the best cloud for machine learning.”
Pichai also spoke about his desire to make the best processors available on GCP, including general-purpose CPUs and TPUs, and he gave a shout-out to Nvidia’s recently announced GPUs, which Google Cloud will also make available to its clients.
Following the news of the new hardware coming to Google Cloud, Pichai also announced Google.ai, a collection of different AI initiatives at Google that will focus on three areas: research, tools (TensorFlow and TPUs), and applied AI.
Google wants to make it easier for a broad range of developers to use Google AI. If Google.ai helps developers solve problems with better AI than what is available on competing public clouds, it may give Google Cloud a differentiated offering that accelerates its growth beyond its competitors.