Get ready for really low-power AI: Synaptics and Eta Compute envision neural nets that will observe every sound, every motion


Eta Compute had already developed its own ASIC chip and system board for low-power applications. Now it will devote its effort to making software tuned to Synaptics’s chips.

Jon Gordon

Smart buildings, smart cities, smart transportation — such applications of the Internet of Things have been part of the lore of technology companies for over a decade now. But what does it really mean for there to be sensors that are constantly measuring the ambient noise of rooms, or watching people move about, day and night?

That kind of constant surveillance may be coming to some built environments as soon as later this year, thanks to the arrival of chips and software that are dramatically more efficient at running algorithms within the tightest of energy constraints. 

Companies that have already established a beachhead in IoT are spreading out to AI-enabled applications at the edge: computing infrastructure that runs neural nets on real-world data, positioned in rooms, in factories, and on equipment, but with vastly lower power needs. 

One such effort comes from chip maker Synaptics, which already sells numerous IoT chips for consumer applications of IoT, such as set-top boxes and speakers. 

In a bid to make inroads into the industrial IoT, Synaptics is going into production this month with the first silicon in a planned family of ultra-low-power chips, called Katana, after the samurai sword. 

The company last month announced a strategic investment in Eta Compute, a five-year-old startup based near Los Angeles in Westlake Village that is providing the software expertise and support on top of Synaptics’s silicon. Synaptics’s share, undisclosed, is part of a $12.5 million total Series C round of financing for Eta that includes previous investors.

The big problem that Synaptics and Eta are trying to solve is making really, really low-power chips that can support applications written in machine learning frameworks such as Google’s TensorFlow. 


“Almost every semiconductor company is an AI company these days,” observed Vineet Ganju, vice president of marketing for edge AI at Synaptics, in a Zoom interview with ZDNet.

“Areas where we are differentiating is in low-power for voice and audio neural network operation, and this hybrid platform that will do both audio and image processing at micro-watts of power.” Competitors’ parts, contended Ganju, tend to handle only image or audio processing, not both.

Synaptics and Eta are helping to fulfill a broad mandate for low-power devices that Google and others have been describing in recent years as a stretch goal for the entire chip field. 

As Google’s guru for embedded AI, Pete Warden, proclaimed during a chip conference in October, AI is going to move to “the edge of the edge,” devices that draw a single milliwatt to operate versus the tens of watts that phones or PCs use. 

Also: Google AI executive sees a world of trillions of devices untethered from human care

Such energy discipline “is really important, because that means you have a device that can run on double-A batteries for a year or two years, or even via energy harvesting from solar or vibration,” Warden said.

Eta Compute has already been working with customers on applications that play to that theme. For example, audio applications such as listening for a device wake word, the kind of thing you’re familiar with from “Hey, Google,” need to be super-efficient to be always listening. 

One part of that application is just monitoring the ambient sound level of a room, what’s called low-power activity detection, “just telling if the sound in a room has changed,” said Ganju. 

That background detection should only consume “tens of micro-watts of power,” meaning, tens of millionths of a watt, said Ganju. The actual processing of a detected keyword might burst up to hundreds of micro-watts, he said. 
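The idea of low-power activity detection can be sketched in a few lines: rather than running a full keyword-spotting network continuously, the device compares each audio frame's energy against a quiet-room baseline and only wakes the heavier processing when something changes. This is a minimal illustration of the concept, not Eta's or Synaptics's actual algorithm; the threshold ratio and frame sizes here are arbitrary assumptions.

```python
import numpy as np

def rms_energy(frame):
    """Root-mean-square energy of one audio frame."""
    return float(np.sqrt(np.mean(np.square(frame, dtype=np.float64))))

def activity_detected(frame, baseline, ratio=4.0):
    """Flag a frame whose energy jumps well above the quiet baseline.

    Only when this cheap check fires would the device spend the extra
    hundreds of micro-watts on actual keyword processing.
    """
    return rms_energy(frame) > ratio * baseline

# A quiet room baseline, then a sudden loud burst.
rng = np.random.default_rng(0)
quiet = rng.normal(0, 0.01, 1600)   # ~100 ms of near-silence at 16 kHz
loud = rng.normal(0, 0.5, 1600)     # a sudden noise
baseline = rms_energy(quiet)
print(activity_detected(quiet, baseline))  # False: nothing changed
print(activity_detected(loud, baseline))   # True: the room got louder
```

A real implementation would track a slowly adapting baseline rather than a fixed one, but the division of labor is the same: an always-on check measured in micro-watts, gating a burstier, costlier stage.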

An image application in an industrial setting might be a battery-powered camera designed to perform very basic object detection while draining just thousandths of a watt. One example is a camera that senses people entering or leaving a room (similar to Density, a startup profiled last year by ZDNet).


The Katana system-on-chip includes multiple processor cores, among which are the neural network core that runs the actual machine learning code, a digital signal processor to receive and process real-world signals, and a host controller CPU core to run system software.


An example camera application might take a single frame of video per second, just enough to detect that there is now a person in an enclosed space where there wasn’t a second before. 

The idea is for such devices to run on one set of batteries for years consuming “single-digit milliwatts of power,” said Ganju. 
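The back-of-envelope arithmetic behind those claims is simple: battery energy divided by average draw gives lifetime. The sketch below uses assumed figures of roughly 2500 mAh for alkaline AA cells at about 3 V for two in series, and ignores self-discharge and voltage droop, so the numbers are illustrative only.

```python
def battery_life_days(capacity_mah, voltage_v, avg_power_mw):
    """Rough battery lifetime: stored energy (mWh) over average draw (mW)."""
    energy_mwh = capacity_mah * voltage_v
    hours = energy_mwh / avg_power_mw
    return hours / 24.0

# Two alkaline AA cells in series: ~2500 mAh at ~3 V (assumed figures).
print(round(battery_life_days(2500, 3.0, 1.0)))  # ~1 mW draw: about 312 days
print(round(battery_life_days(2500, 3.0, 5.0)))  # ~5 mW draw: about 62 days
```

At a single milliwatt of average draw, the figure Warden cites, two AA cells last close to a year; at tens of milliwatts, lifetime collapses to weeks, which is why the micro-watt and single-digit-milliwatt targets matter.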

A lot of existing image processing AI chips, observed Ganju, are derived from automotive applications. Those tend to be relatively power-hungry, he said, because they draw from a car battery. Think of road imagery from on-board cameras and LIDAR. 

Also: People do strange things in doorways: Density watches the office as employees come back

“They’re trying to bring that down to lower levels, and that’s a big leap,” he said of automobile chip makers who try to move to edge applications. “We’ve come at it the other way, starting from a low-power [CPU] core and seeing what kind of image processing we can do on there.”

The more efficient the chip, the more Synaptics and Eta can expand its capabilities. For example, with the motion-sensing camera, a goal is to take the current one-frame-per-second feed up to five frames per second, for greater precision.

The need to run off battery power is one of those surprising details of the industrial IoT that doesn’t become apparent until you start thinking through how systems have to be installed. 

“There’s a big motivation to move to battery-powered cameras with one to two years of life,” said Ganju. “We’ve heard from potential customers that to install a wired-powered, plugged-in camera, is in the hundreds of dollars per camera, just in installation cost alone, by the time you open up the ceiling, run wires, shut down parts of the building.”

Eta’s appeal for Synaptics was in part just having discovered this realm of applications Synaptics didn’t know as well.

“We realized there was a hole in the market for someone to provide this kind of resolution at these power levels to these types of segments,” said Ganju. “Eta had done that market research for us, if you will.”

Eta had already developed its own home-grown ASIC chip to run a suite of neural net software it calls TENSAI. The company has some very seasoned chip talent, including chief executive Ted Tewksbury, who has held numerous executive roles in the semiconductor world, including as head of Integrated Device Technology. 


But the company was facing the challenge of being able to proliferate and diversify its offering, which is not easy for a five-year-old company with about twenty employees and just $31.9 million in financing. 

“With this partnership, we have now a very strong silicon partner that can help us build a complete solution,” said Samir Haddad, who serves as senior director for product marketing, in a Zoom interview with ZDNet.

Going forward, Eta will focus on making the most efficient software for Synaptics's chips, under an exclusive arrangement for some period of time. Eta was one of the first companies to implement Google's TFMicro implementation of TensorFlow for embedded devices. 

What sets Eta apart from other companies in edge inference, said Haddad, was having a broader approach to the problem, not just speeding up neural nets but thinking through how multiple kinds of silicon might work together as building blocks. 

A neural net is composed of predominantly linear algebra operations, multiplications of matrices. But the total problem to be solved in embedded devices is not just the mathematical operations. It’s also things such as power management, signal processing issues when interfacing with a sensor, and a host of other systems engineering issues. 

“It’s not just the MAC [multiply-accumulate operation], it’s memory management and the compiler that maps TensorFlow efficiently onto the processor,” Haddad explained.

“You need to efficiently map your algorithms to these various processors,” including the neural net functional block but also the embedded digital signal processor and the host CPU that runs system software routines. “That is a very complex problem, and we solve that.”
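The MAC that Haddad refers to is the innermost operation of those matrix multiplications. A toy sketch of one dense layer as explicit int8 multiply-accumulates, with an int32 accumulator as embedded accelerators commonly use, shows what a neural core is built to speed up; this is an illustration of the arithmetic, not Eta's or Synaptics's implementation.

```python
import numpy as np

def int8_matvec(weights, x):
    """One dense layer as explicit int8 multiply-accumulates (MACs).

    Each inner-loop step is one MAC: an 8-bit multiply added into a
    32-bit accumulator, the operation a neural accelerator hardwires.
    """
    acc = np.zeros(weights.shape[0], dtype=np.int32)
    for i in range(weights.shape[0]):
        for j in range(weights.shape[1]):
            acc[i] += np.int32(weights[i, j]) * np.int32(x[j])  # one MAC
    return acc

w = np.array([[1, -2], [3, 4]], dtype=np.int8)
x = np.array([5, 6], dtype=np.int8)
print(int8_matvec(w, x))  # [-7 39]
```

The systems problem Haddad describes is everything around this loop: deciding which of these loops run on the neural core versus the DSP or host CPU, and keeping weights and activations staged in the right memories so the MACs never stall.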

“You need to provide a complete system that works according to what is expected,” said Haddad.

Eta expects to bring that systems approach to multiple fields, including retailing; development of “smart” buildings and homes; robotics; notebook computers and other consumer devices; and something called Industry 4.0.

The same kind of object detection in the building camera example might be adapted in robotics to let a manufacturing robot "sense" that a person is in its immediate vicinity, for safety reasons. A notebook could be made aware its owner is approaching and wake up as the person comes closer.

Of course, Synaptics is not the only company that is developing ultra-low-power parts. Startups are focusing on highly efficient parts for the edge, such as Ambient Scientific of San Jose, California, which claims to be able to re-train neural nets continuously even in low-power mode.

But Synaptics's scale should be a meaningful advantage over what any startup can achieve. The existing development and manufacturing operation at Synaptics, which already turns out chips in high volume, makes it more likely the company can deliver on the vision of Katana as a family comprising multiple parts with varying specifications for diverse applications.

“There will be versions of Katana targeting lower-cost, higher volume applications, depending on the number of interfaces supported,” said Ganju. “And then there will be higher-performance models, for higher-performance cameras, for example.”

Equally meaningful should be Synaptics’s ability to sell all over the world. 

“Even though we are not necessarily focused on this today, we have a worldwide footprint of direct sales team,” Ganju pointed out. “We are able to touch most of these customers in each region, so we can bring that scale to what Eta has been trying to do.”

Products using the Synaptics and Eta technology should start appearing in the marketplace by early next year. 
