Connect the Dots: Neuromorphic Computing Spikes toward Smarter IoT

Created: 10/17/2017 - 18:56

The announcement of Loihi, the first self-learning chip, kick-starts a sorting out of what’s real, unreal, and downright surreal in the world of computing. For solution providers looking for an edge in the rapidly changing landscape of the IoT, getting a handle on these new post-von Neumann architectures could mean the difference between new business and no business.

It’s not as though it’s going to happen right away, but it’s necessary to adhere to Andy Grove’s mantra that only the paranoid survive. With that in mind, let’s look at what Intel is trying to achieve with Loihi.

As indicated in How To Manage All The Dots and Data, the massive amounts of data being generated by the IoT need to be handled and analyzed productively, efficiently, and cost-effectively, or else all is for naught. However, classic von Neumann computing architectures are not very good at analyzing terabytes of data and making useful decisions from them.

That’s why companies such as Nvidia have turned their graphics processing units (GPUs) to artificial intelligence and deep learning applications, and have partnered with other purveyors of disparate processing architectures. The end game is to find the right mix of processing ingredients to efficiently process IoT data and extract useful decisions from it.

The problem is that nothing comes close to being as efficient as the human brain, which – according to those who know how to estimate these kinds of things – can do with 20 watts what current supercomputers would need megawatts to do. Something has to change in order to make use of the IoT’s highly dynamic and unstructured data.

Researchers have turned inward to analyze how the brain works and possibly emulate it. Almost 30 years ago, Caltech professor Carver Mead introduced the concept of neuromorphic computing to do just that. He initially described it as a very large scale integration (VLSI) system with analog circuits that mimic the neuro-biological architectures of the nervous system. More recently, neuromorphic has come to mean mixed-mode analog/digital VLSI chips and software that implement models of neural systems.

Regardless of how it’s described, the point is that neuromorphic engineering tries to understand how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations. It essentially tries to copy how the brain works and how it represents data, with particular attention to learning, plasticity, and evolutionary change.

This was all well and good as theory, until Intel recently announced Loihi.

Loihi: Baby Steps Toward AI

Loihi is the codename for a first-of-its-kind, self-learning neuromorphic chip that mimics how the brain functions by learning to operate based on various modes of feedback from the environment (Figure 1).

Figure 1. Intel’s Loihi is the first neuromorphic chip: a chip that can learn by itself and make its own inference decisions. (Image source: Intel Corp.)

It uses data to learn and make inferences, gets smarter over time and doesn’t need to be “trained” in the traditional sense. Instead, it relies on the concept of asynchronous spiking. Intel summarizes the concept succinctly as:

The brain’s neural networks relay information with pulses or spikes, modulate the synaptic strengths or weight of the interconnections based on timing of these spikes, and store these changes locally at the interconnections. Intelligent behaviors emerge from the cooperative and competitive interactions between multiple regions within the brain’s neural networks and its environment.
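To make the idea concrete, here is a minimal sketch in Python of a single leaky integrate-and-fire neuron whose input weight is nudged by a simplified spike-timing-dependent plasticity (STDP) rule, so the weight change is stored locally at the connection, as the passage describes. This is an illustration only, not Loihi’s implementation; the threshold, leak, learning rate, and timing-window values are assumptions chosen for readability.

```python
# Minimal sketch of asynchronous spiking with a simplified STDP rule.
# Illustrative only; parameter values are assumptions, not Loihi's design.

import random

THRESHOLD = 1.0      # membrane potential at which the neuron "spikes"
LEAK = 0.9           # fraction of potential retained each time step
LEARN_RATE = 0.05    # size of each STDP weight adjustment
STDP_WINDOW = 5      # time steps within which pre/post spikes count as paired

def run(input_spikes, weight=0.5):
    """Simulate one neuron driven by a binary input spike train."""
    potential = 0.0
    last_pre_spike = None
    for t, pre_spike in enumerate(input_spikes):
        # Leak, then integrate the weighted input spike
        potential = potential * LEAK + (weight if pre_spike else 0.0)
        if pre_spike:
            last_pre_spike = t
        if potential >= THRESHOLD:
            # Post-synaptic spike: if the input fired shortly before,
            # strengthen the connection (potentiation). The change lives
            # locally at the "synapse", as in the quoted passage.
            if last_pre_spike is not None and t - last_pre_spike <= STDP_WINDOW:
                weight = min(1.0, weight + LEARN_RATE)
            potential = 0.0  # reset after firing
    return weight

if __name__ == "__main__":
    random.seed(0)
    spikes = [random.random() < 0.3 for _ in range(200)]
    print("final weight:", run(spikes))
```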

Much of the concern around AI has been the definition of rules and behaviors and the training of systems to respond to predetermined scenarios. Defining these rules opens up two Pandora’s boxes. The first is time: it can take forever to optimize for hundreds or millions of permutations using typical machine-learning approaches, as they don’t generalize well. The second is who gets to decide the rules of behavior and the “most appropriate” responses.

However, according to Dr. Michael Mayberry, corporate vice president and managing director of Intel Labs, Loihi is a step toward realizing the full benefits of machine-based systems that can quickly make complex decisions without direct, a priori, experiential knowledge. One example Intel gives is recognizing and flagging an abnormal heartbeat after an extended period of self-learning “normal” patterns (Figure 2). In security applications, for example, it could identify a breach by distinguishing the normal from the abnormal.

Figure 2. Monitoring a heartbeat for abnormalities without having to train a system is one of the Holy Grails of smart IoT solutions. (Image source: Intel)
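The heartbeat example boils down to a familiar pattern: learn what “normal” looks like from the data stream itself, then flag departures from it. The hedged sketch below illustrates that pattern with a plain statistical stand-in, a running mean and variance of inter-beat intervals with a 3-sigma threshold, rather than spiking neurons; the interval values and threshold are assumptions for illustration, not Intel’s method.

```python
# "Learn normal, flag abnormal" with simple online statistics.
# A non-neuromorphic stand-in for illustration; all values are assumptions.

import math

class HeartbeatMonitor:
    def __init__(self, sigma_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations (Welford's algorithm)
        self.sigma_threshold = sigma_threshold

    def observe(self, interval_ms):
        """Update the learned 'normal'; return True if this interval looks abnormal."""
        if self.n >= 10:  # only flag once some "normal" history exists
            std = math.sqrt(self.m2 / (self.n - 1))
            if std > 0 and abs(interval_ms - self.mean) > self.sigma_threshold * std:
                return True
        # Welford's online update for mean and variance
        self.n += 1
        delta = interval_ms - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (interval_ms - self.mean)
        return False

if __name__ == "__main__":
    monitor = HeartbeatMonitor()
    intervals = [800, 810, 790, 805, 795, 800, 812, 798, 802, 806, 804, 1400]
    for beat in intervals:
        if monitor.observe(beat):
            print(f"abnormal inter-beat interval: {beat} ms")
```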

In the first half of 2018, the Loihi test chip will be shared with universities and research institutes that are focused on advancing the state of the art in AI.

In the meantime, IoT solution providers need to think long and hard about how to adapt to new computing architectures and programming methods in order to stay relevant in an era of wide and accelerating architectural change.

About the Author

Patrick Mannion
Patrick Mannion is an independent writer and content consultant who has been working in, studying, and writing about engineering and technology for over 25 years.
