At Google I/O 2016, Google unveiled custom hardware for machine learning systems.
Machine learning has become an essential part of many of Google's services. The company says more than 100 of its teams use machine learning, from Street View to Inbox Smart Reply to voice search.
Machine learning can benefit from tailored hardware. That's why Google started a stealthy project several years ago to develop the Tensor Processing Unit (TPU), a custom ASIC built specifically for machine learning and tailored for TensorFlow.
Google has been running TPUs inside its data centers for more than a year and has found that they deliver an order of magnitude better performance per watt for machine learning.
This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law).
The TPU is tailored to machine learning applications, which allows the chip to tolerate reduced computational precision, meaning it requires fewer transistors per operation. Because of this, Google can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models, and apply those models more quickly, so users get more intelligent results more rapidly. A board with a TPU fits into a hard disk drive slot in Google's data center racks.
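To make the reduced-precision idea concrete, here is a minimal sketch of 8-bit linear quantization, the general kind of technique that lets hardware trade numeric precision for fewer transistors per operation. This is an illustrative example only, not Google's actual TPU arithmetic; the function names and the simple max-scaling scheme are assumptions for the sake of the demo.

```python
# Illustrative sketch of 8-bit quantization (NOT Google's actual TPU scheme):
# 32-bit floats are mapped onto signed 8-bit integers, trading a small
# amount of precision for much cheaper arithmetic in silicon.

def quantize(values, num_bits=8):
    """Map a list of floats onto signed integers of num_bits width.

    Returns the integer codes and the scale factor needed to recover
    approximate float values later.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for int8
    peak = max(abs(v) for v in values)
    scale = (peak / qmax) if peak else 1.0  # avoid division by zero
    return [round(v / scale) for v in values], scale

def dequantize(codes, scale):
    """Recover approximate float values from integer codes."""
    return [c * scale for c in codes]

# A few example model weights: after the int8 round trip, each value is
# recovered to within half a quantization step (scale / 2).
weights = [0.42, -1.3, 0.07, 0.9]
codes, scale = quantize(weights)
approx = dequantize(codes, scale)
```

All codes fit in a single signed byte, yet the reconstructed values stay close to the originals, which is why many machine learning models tolerate this loss of precision so well.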
Tensor Processing Units power many applications at Google, including RankBrain, which improves the relevancy of search results, and Street View, where they improve the accuracy and quality of Google's maps and navigation.
AlphaGo was also powered by Tensor Processing Units in its matches against Go world champion Lee Sedol, enabling it to "think" much faster and look further ahead between moves.