System on a chip is the new motherboard, says Google. Here’s what that actually means

In a blog post, Google’s vice president of systems infrastructure Amin Vahdat put it in simple terms: “The [Systems on Chip] is the new motherboard,” he wrote. 

The cloud computing giant, always in need of more computing power for its servers, has until now relied on the motherboard as its integration point, the place where CPUs, networking, storage devices, custom accelerators and memory all come together. But with compute at Google now at an inflection point, Vahdat argued, the next big step is needed.

“To gain higher performance and to use less power, our workloads demand even deeper integration into the underlying hardware,” Vahdat said.

On an SoC, the latency and bandwidth between different components can be orders of magnitude better, with reduced power and cost compared to composing individual ASICs on a motherboard, Google argues. “Just like on a motherboard, individual functional units (such as CPUs, TPUs, video transcoding, encryption, compression, remote communication, secure data summarization, and more) come from different sources. We buy where it makes sense, build it ourselves where we have to, and aim to build ecosystems that benefit the entire industry,” Vahdat said. 
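To get a feel for the scale of that gap, the back-of-envelope sketch below compares a board-level link with an on-die one. The latency and energy-per-bit figures are round, hypothetical numbers chosen purely for illustration, not measurements from Google's designs.

```python
# Back-of-envelope comparison of a board-level link (e.g. a hop across a
# PCIe-style interconnect) with an on-die link. The figures are illustrative
# round numbers, not Google's measurements.
board_link = {"latency_ns": 500.0, "energy_pj_per_bit": 10.0}
on_die_link = {"latency_ns": 5.0, "energy_pj_per_bit": 0.1}

for name, link in [("board-level", board_link), ("on-die", on_die_link)]:
    print(f"{name:>11}: {link['latency_ns']:>6.1f} ns/transfer, "
          f"{link['energy_pj_per_bit']:.1f} pJ/bit")

# With these assumed figures, integration buys roughly two orders of
# magnitude on both latency and energy per bit moved.
print(f"latency ratio: ~{board_link['latency_ns'] / on_die_link['latency_ns']:.0f}x")
print(f"energy ratio:  ~{board_link['energy_pj_per_bit'] / on_die_link['energy_pj_per_bit']:.0f}x")
```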

Vahdat announced that Intel veteran Uri Frank will be joining Google’s ranks as vice president of engineering to work on server chip design, further suggesting that the company is ready to ramp up its efforts in the race for ever-more efficient chips. 

“Google has designed and built some of the world’s largest and most efficient computing systems,” said Frank. “For a long time, custom chips have been an important part of this strategy.”

Google has been designing its own chips for many years now, producing hardware adapted to specific use cases in an effort to meet compute needs more efficiently than it could with general-purpose chips sold by companies like AMD or Nvidia. 

With the advent of cloud computing, demand for processing power in Google’s data centers has surged, which is why the company’s in-house engineers have built custom-made chips adapted to specific needs. For example, in 2015 Google introduced the Tensor Processing Unit (TPU), which runs in the company’s data centers to improve the performance of machine-learning applications delivering services such as real-time voice search, photo object recognition and interactive language translation. 
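From the software side, targeting one of these accelerators can be as simple as letting a compiler place the work on the device. The sketch below uses JAX, one of the frameworks Google runs on TPUs; it assumes it is executed on a machine with TPU devices attached, such as a Cloud TPU VM (on a CPU-only machine the same code simply falls back to the CPU).

```python
# A minimal sketch of running a dense matrix computation on a TPU via JAX.
# Assumes a host with TPU devices attached (e.g. a Cloud TPU VM); on a
# CPU-only machine jax.devices() lists CPU devices and the code still runs.
import jax

print(jax.devices())  # e.g. [TpuDevice(id=0), ...] on a TPU host

@jax.jit  # compile once via XLA for whatever accelerator is attached
def dense_layer(weights, inputs):
    # Large matrix multiplies like this are exactly the workload
    # TPUs were built to accelerate.
    return jax.nn.relu(inputs @ weights)

key = jax.random.PRNGKey(0)
weights = jax.random.normal(key, (1024, 1024))
inputs = jax.random.normal(key, (8, 1024))

print(dense_layer(weights, inputs).shape)  # (8, 1024)
```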

Custom chips have been pitched as an effective way to keep pace as Moore’s law slows, by building efficient, targeted pieces of hardware capable of coping with exponential increases in demand for compute power. 

“Instead of integrating components on a motherboard where they are separated by inches of wires, we are turning to Systems-on-Chip designs where multiple functions sit on the same chip, or on multiple chips inside one package,” said Vahdat. 

With deeper integration into the underlying hardware, higher performance will come with lower power consumption, said Vahdat, which will in turn drive new efficiencies and scale. SoCs will also allow for even more customization, with the ability to specialize hardware for individual applications; the challenge will be doing so rapidly enough given the variety of services currently offered in the cloud. 

With more of its competitors already turning to custom-made SoCs, it is only natural that Google would focus its efforts on developing its own technology. Apple pioneered in-house SoC design as early as 2010, when the Cupertino giant switched to its own ARM-based chips for iPhones; the technology is now the basis of many products including iPads and Apple Watches. 
