Google announced a new ASIC that will accelerate its internal machine learning algorithms and provide a compelling platform for AI practitioners to use the Google Cloud for their research, development, and production AI work. Tactically, this chip should deliver significant cost savings for Google, widely believed to be the largest consumer of machine learning chips in the world. The “Cloud TPU” is packaged as a 4-chip module, complete with a fabric to interconnect these powerful processors, allowing very high levels of scaling. Google also announced the TensorFlow Research Cloud, a 1,000-TPU (4,000-chip) supercomputer delivering 180 petaFLOPS (180 quadrillion, presumably 16-bit, floating-point operations per second) of compute power, available free to qualified research teams. This is similar in concept to NVIDIA’s Saturn V supercomputer, but significantly larger; unlike Saturn V, which is available for all types of software, the Google supercomputer is designed to support only Google’s own open-source TensorFlow machine learning framework and ecosystem.
Source: Forbes May 22, 2017 18:56 UTC