Google TPU vs. Nvidia V100
Google as the GPU maker? GPU => Google Processing Unit?
Google 'Cloud TPU' takes machine learning lead from Tesla V100
Mark Tyson on 18 May 2017, 10:01
"...As CNBC reports, the reveal of the Google Cloud TPU last night is "potentially troubling news for Nvidia, whose graphics processing units (GPUs) have been used by Google for intensive machine learning applications."
In its most recent financial report Nvidia pointed to fast growth in revenues from AI and deep learning, and even cited Google as a notable customer.
Now Google has indicated that it will use its own TPUs more in its own core computing infrastructure. Google is also creating the TensorFlow Research Cloud, a cluster of 1,000 Cloud TPUs that it will make available to top researchers for free...
Last but not least, Google is happy to help with software and will bring second-generation TPUs to Google Cloud for the first time as Cloud TPUs on Google Compute Engine (GCE). This will let customers mix and match Cloud TPUs with Skylake CPUs, Nvidia GPUs, and the rest of Google's infrastructure and services to build the best ML system.
So how do Google's new Cloud TPUs perform? Google says each TPU module, as pictured above, can deliver up to 180 teraflops of floating-point performance. Each module contains four Cloud TPU chips (45 teraflops each). These devices are designed to work in larger systems: for example, a pod of 64 TPU modules can apply up to 11.5 petaflops of computation to a single ML (machine learning) training task.
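The arithmetic behind those figures can be checked with a quick sketch, using only the per-chip, per-module, and per-pod counts quoted above:

```python
# Figures quoted by Google for the second-generation Cloud TPU.
TFLOPS_PER_CHIP = 45    # peak throughput per Cloud TPU chip
CHIPS_PER_MODULE = 4    # chips on one TPU module
MODULES_PER_POD = 64    # modules in one TPU pod

module_tflops = TFLOPS_PER_CHIP * CHIPS_PER_MODULE   # 4 x 45 = 180 TFLOPS
pod_pflops = module_tflops * MODULES_PER_POD / 1000  # teraflops -> petaflops

print(module_tflops)  # 180
print(pod_pflops)     # 11.52, reported as "up to 11.5 petaflops"
```

The small gap between 11.52 and the quoted 11.5 petaflops is just Google rounding down for the headline figure.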
Roughly comparing a Cloud TPU module against Nvidia's Tesla V100 accelerator, Google comes out ahead: about six times the V100's FP16 half-precision throughput, and roughly 50 per cent faster than its 'Tensor Core' throughput. Google has yet to share inference performance figures for the new Cloud TPU.
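Those ratios can be sanity-checked with a short sketch. Note the V100 figures used here (30 teraflops FP16, 120 teraflops Tensor Core) are Nvidia's launch specifications and are an assumption not stated in the article itself:

```python
TPU_MODULE_TFLOPS = 180   # Google's quoted figure per Cloud TPU module
V100_FP16_TFLOPS = 30     # assumed Tesla V100 FP16 peak (Nvidia launch spec)
V100_TENSOR_TFLOPS = 120  # assumed Tesla V100 Tensor Core peak (launch spec)

fp16_ratio = TPU_MODULE_TFLOPS / V100_FP16_TFLOPS      # 6.0 -> "six times"
tensor_ratio = TPU_MODULE_TFLOPS / V100_TENSOR_TFLOPS  # 1.5 -> "50 per cent faster"

print(fp16_ratio, tensor_ratio)  # 6.0 1.5
```

The comparison is rough in another sense too: it pits a four-chip TPU module against a single V100 chip, so per-chip the gap is much narrower.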
Furthermore, Cloud TPUs "are easy to program via TensorFlow, the most popular open-source machine learning framework," says Google."