Google has taken a big leap forward with the speed of its machine learning systems by creating its own custom chip that it’s been using for over a year.
The company was rumored to have been designing its own chip, based partly on job ads it posted in recent years. But until today it had kept the effort largely under wraps.
It calls the chip a Tensor Processing Unit, or TPU, named after the TensorFlow software it uses for its machine learning programs. In a blog post, Google engineer Norm Jouppi refers to it as an accelerator chip, meaning it's designed to speed up one specific kind of task.
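To make that concrete, here's a minimal sketch (mine, not Google's) of the kind of tensor arithmetic TensorFlow programs express, written for modern TensorFlow's eager mode. Dense matrix math like this is the workload an accelerator chip targets:

```python
# A minimal sketch of the tensor math TensorFlow expresses; runs on any
# TensorFlow 2.x install, no TPU required. All values are illustrative.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])   # a batch of 2 inputs, 2 features each
w = tf.constant([[0.5, -1.0], [1.5, 2.0]])  # a 2x2 weight matrix
b = tf.constant([0.1, 0.2])                 # a bias vector

# One dense layer: a matrix multiply plus a bias add -- the core operation
# that accelerator chips like the TPU are built to speed up.
y = tf.matmul(x, w) + b
print(y)
```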
At its I/O conference Wednesday, CEO Sundar Pichai said the TPU provides an order of magnitude better performance per watt than existing chips for machine learning tasks. It's not going to replace CPUs and GPUs, but it can speed up machine learning processes without consuming much more energy.
As machine learning becomes more widely used in all types of applications, from voice recognition to language translation and data analytics, having a chip that speeds up those workloads is essential to maintaining the pace of advancement.
And as Moore's Law slows down, reducing the gains from each new generation of processor, using accelerators for key tasks becomes even more important. Google says its TPU delivers gains equivalent to moving Moore's Law forward three generations, or about seven years.
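For a sense of scale, a strict doubling model reads the claim this way (this is back-of-the-envelope arithmetic on my part, not a figure Google published):

```python
# Rough arithmetic behind the "three generations" framing; my model, not Google's.
generations = 3
years = 7                          # the article's "about seven years"
cadence = years / generations      # ~2.3 years per generation
gain = 2 ** generations            # ~8x, if each generation doubles performance

print(f"~{cadence:.1f} years per generation, ~{gain}x raw gain "
      f"under a strict doubling model")
```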
The TPU is in production use across Google’s cloud, including powering the RankBrain search result sorting system and Google’s voice recognition services. When developers pay to use the Google Voice Recognition Service, they’re using its TPUs.
Urs Hölzle, Google’s senior vice president for technical infrastructure, said during a press conference at I/O that the TPU can augment machine learning processes but that there are still functions that require CPUs and GPUs.
Google started developing the TPU about two years ago, he said.
Google currently has thousands of the chips in use. They fit in the same slots used for hard drives in Google's data center racks, which means the company can easily deploy more if it needs to.
For now, though, Hölzle said Google doesn't need a TPU in every rack.
If there’s one thing that Google likely won’t do, it’s sell TPUs as standalone hardware. Asked about that possibility, Google enterprise chief Diane Greene said that the company isn’t planning to sell them for other companies to use.
Part of that has to do with where application development is heading: developers are increasingly building cloud-only applications and don't want to worry about managing hardware configurations, maintenance, and updates.
Another possible reason is that Google simply doesn’t want to give its rivals access to the chips, which it likely spent a lot of time and money developing.
We don't yet know what exactly the TPU is best used for. Analyst Patrick Moorhead said he expects the chip will be used for inference, the stage where an already-trained model is applied to new data, which doesn't require as much flexibility as training.
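Here's a hedged sketch of the split Moorhead is drawing; the model and data below are hypothetical and come from me, not from the article or from Google:

```python
# A hypothetical model illustrating training vs. inference; nothing here
# is Google's actual workload.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Training: gradient computation, optimizer state, and weight updates --
# the "flexible" part of the workload.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_images, train_labels, epochs=5)  # would need a real dataset

# Inference: a fixed forward pass over new data, with no gradients. The
# computation is static, which is what makes it a natural fit for a
# specialized accelerator.
sample = np.random.rand(1, 784).astype("float32")
prediction = model(sample)
print(prediction.numpy().argmax())  # index of the most likely class
```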
Right now, that's all Google is saying. We still don't know which chip manufacturer is building the silicon for Google. Hölzle said the company will reveal more about the chip in a paper to be released this fall.