Brains “in silico” — neuromorphic computing

31 May 2020
Brain drawing

As the need for physical isolation and social distancing in these difficult times keeps us at home most of the time, there is an additional opportunity to dive into some new topics or finally catch up on the topics you enjoy. That was totally my case – I hadn't been reading much about recent advances in neuroscience and technology, as I am strictly focused on my Master's thesis right now.

A futuristic graphic of a man, brain and neurons.
What light through yonder window breaks? ~ W. Shakespeare [0]
I've always wanted to use graphics like this. Dear Friends from Neuroscience, please enjoy. <3
Source: Sabine Zierer from Pixabay

However, some time ago I visited Geekweek [1] and was caught by this title: "Intel created a neuromorphic computer corresponding to the brain of a small mammal" [my translation; the original title and article are in Polish]. And I was like… whaaat?

What is neuromorphic computing?

As I started digging, I found out that neuromorphic computing has grown at the intersection of neuroscience, computer science and electrical engineering [2]. Over the past decades, each of these fields has developed rapidly, opening up the possibility of simulating neurons in silico. It all started with the work of Dr Carver Mead in the 1980s [3, 4]. Where might this fantastic journey take us in the future?

Neuromorphic computing derives its inspiration straight from research on neurophysiology and systems neuroscience [4] and attempts to mimic the processes or pathways occurring at the level of a single neuron or a group of neurons. Let's take an example – the action potential – which all of you probably remember from middle / high school. The voltage increases, reaches the threshold and… spike! Then back to the resting potential. What is more difficult to recall from high school memories is how this happens at the level of the nerve cell membrane – that several types of ions are involved in the process, flowing "back and forth" through the membrane. The process was described mathematically and explained by Alan Hodgkin and Andrew Huxley [5], who together received the Nobel Prize in Physiology or Medicine in 1963. One-third of the Prize was awarded to John Eccles for his contribution to our knowledge about synapses (e.g. postsynaptic potentials) [6]. That said, once we have a mathematical model of the states a nerve cell passes through while transmitting a signal, we can explore the possibility of embedding it in an electrical circuit.
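To get a feel for how such spiking behavior can be captured computationally, here is a minimal sketch in Python of a leaky integrate-and-fire neuron – a heavily simplified stand-in for the full Hodgkin–Huxley equations, with all parameter values invented purely for illustration:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a simplified spiking model,
# NOT the full Hodgkin-Huxley description. All constants are illustrative.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-65.0, v_threshold=-50.0, r_m=10.0):
    """Simulate the membrane voltage over time; return (voltages, spike_times)."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by the input current.
        dv = (-(v - v_rest) + r_m * i_in) / tau
        v += dv * dt
        if v >= v_threshold:                 # threshold reached -> spike!
            spikes.append(round(step * dt, 1))  # record spike time (ms)
            v = v_reset                      # ...and back to the resting potential
        voltages.append(v)
    return voltages, spikes

# A constant input (arbitrary units) for 100 ms produces a regular spike train.
volts, spike_times = simulate_lif([2.0] * 1000)
print(f"{len(spike_times)} spikes, first ones at (ms): {spike_times[:5]} ...")
```

Neuromorphic chips typically implement simplified neuron models from this family directly in hardware, rather than in a software loop like the one above.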

A visualization of action potential in neurons.
A simple way to visualize the action potential. You can see that the membrane depolarizes first and the change in polarization then propagates along the axon.
Author: Laurentaylorj – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=26311114

But there is much more neurophysiological knowledge that can be incorporated into neuromorphic computing: various types of nerve cells, the hierarchical arrangement of cells in the cortex, the remarkably complicated patterns of connections that excite or inhibit other cells, and neuroplasticity (i.e. the reorganization of connections).
Neuromorphic computing aims to implement elements of that knowledge in silico at different levels of abstraction from the biological phenomena, depending on the purpose of the model.
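As a toy illustration of the excitation / inhibition part, the sketch below (with made-up weights and spike patterns) shows how excitatory and inhibitory connections simply push the input current of a downstream neuron in opposite directions:

```python
# Toy illustration: excitatory and inhibitory connections push the input
# current of a downstream neuron in opposite directions.
# Weights and spike patterns here are invented for the example.

excitatory_weights = [0.8, 0.5]   # connections that excite the neuron
inhibitory_weights = [-0.6]       # connections that inhibit it

def input_current(exc_spikes, inh_spikes):
    """Net current injected into the neuron at one time step."""
    current = 0.0
    for w, spiked in zip(excitatory_weights, exc_spikes):
        if spiked:
            current += w
    for w, spiked in zip(inhibitory_weights, inh_spikes):
        if spiked:
            current += w   # w is negative, so this reduces the drive
    return current

print(input_current(exc_spikes=[True, True], inh_spikes=[False]))  # 1.3
print(input_current(exc_spikes=[True, True], inh_spikes=[True]))   # ~0.7
```

A current like this could be fed straight into the integrate-and-fire loop sketched earlier.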

As Furber [3] notes, neuromorphic computing does not reproduce every detail of the biology, and the neuron is intentionally simplified. One of the goals is to achieve efficient computation and energy consumption, bringing us closer to the efficiency observed in biological brains, and to study whether such neuromorphic architectures are able to perform tasks that are easy for animals but difficult for standard computers. In their article, Bai & Yi [4] write that the architectures developed for neuromorphic computing are more efficient than classical computers. Engineers and researchers are now attempting to deliver large-scale solutions involving millions of neurons [3].

[For those who would like to read more, I'd highly recommend Furber's article – see [3] in the References. The author describes the birth of neuromorphic computing and provides a detailed description of several currently developed architectures.]

Big news from Intel

Intel has built a brand new neuromorphic research system which – as they claim – has a "computational capacity of 100 million neurons" while consuming less than 500 W of power [7]. There are three names to recognize in Intel's architecture: Loihi, Nahuku and Pohoiki Springs.

The recent advancement announced by Intel Corporation concerns the last of those names, Pohoiki Springs, which is the neuromorphic research system itself. It consists of 24 Nahuku boards, each carrying 32 Loihi chips, which gives a total of 768 chips inside [7]. Where are those 100 million neurons then?
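A quick back-of-the-envelope calculation answers that, assuming the figure of 128 cores × 1,024 neuron compartments per Loihi chip reported in the Loihi paper [8]:

```python
# Rough sanity check of Intel's "100 million neurons" figure for Pohoiki Springs,
# assuming 128 cores x 1,024 neuron compartments per Loihi chip (per the Loihi paper [8]).
neurons_per_chip = 128 * 1024          # 131,072 neurons per chip
chips = 24 * 32                        # 24 Nahuku boards x 32 Loihi chips = 768
total_neurons = chips * neurons_per_chip
print(f"{chips} chips x {neurons_per_chip:,} neurons = {total_neurons:,}")
# -> 768 chips x 131,072 neurons = 100,663,296  (~100 million)
```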

The power is packed into the Loihi chips themselves, whose design – as researchers from Intel report – was inspired by the biological processes going on in human brains [8]. The authors mention several levels of "brain inspiration", for example hierarchical connectivity between layers of neurons and the adaptation of certain neural parameters, such as transmission delays and synaptic weights (mirroring the "relevance" of a connection). Two billion transistors are built into the chip, covering a total area of 60 mm^2 [8]. To date, it is the most complex of Intel's neuromorphic chips.
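To make the "adaptation of synaptic weights" part more concrete, here is a generic sketch of a spike-timing-dependent plasticity (STDP) rule – the family of spike-based learning rules that chips like Loihi are designed to support [8]. It is not Intel's exact on-chip learning rule, and all constants are invented for the example:

```python
# Generic spike-timing-dependent plasticity (STDP) sketch: a synapse is
# strengthened when the presynaptic neuron fires shortly *before* the
# postsynaptic one, and weakened in the opposite case.
# Textbook-style rule, not Intel's on-chip learning rule; constants are invented.
import math

A_PLUS, A_MINUS = 0.05, 0.055   # learning rates for potentiation / depression
TAU = 20.0                      # time constant of the STDP window (ms)

def stdp_update(weight, t_pre, t_post, w_min=0.0, w_max=1.0):
    """Return the synaptic weight after one pre/post spike pairing."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (potentiation)
        weight += A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # pre fired after post -> weaken (depression)
        weight -= A_MINUS * math.exp(dt / TAU)
    return min(max(weight, w_min), w_max)   # keep the weight within bounds

w = 0.5
w = stdp_update(w, t_pre=10.0, t_post=15.0)   # causal pairing -> weight grows
w = stdp_update(w, t_pre=30.0, t_post=22.0)   # acausal pairing -> weight shrinks
print(round(w, 4))
```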

Drawing of a brain.
Image by holdentrils from Pixabay

Loihi has been demonstrated to recognize gestures in real time, as well as to perform some initial steps of "visual attention" orienting [7]. What is absolutely fascinating is the result of Intel's experiments in which Loihi was trained to be an "electronic nose" able to recognize several odors [9]. Fas-ci-na-ting! Let's consider what possibilities this may open up in the future. Maybe a future Mars rover with a fast and adaptable ability to check various smells right away? Or devices able to detect dangerous odors that we ourselves can hardly perceive?
To sum up in a single sentence: Intel's recent advancement holds great potential for a variety of applications. If you're interested in more detailed reading, please visit Intel's Newsroom articles dedicated to the Neuromorphic Research Community [10, 11].

Sci-Fi open questions

So, what do you think – if we joined together not 100 million but ~86 billion [12, 13] (the approximate number of neurons in the human brain) of such "neurons", would we get… cognition? A mind? The "great power of humanity"? Right now Pohoiki Springs possesses 100 million neurons – similar to a hamster or a mole rat [14]. But does it possess the cognitive capabilities of those animals? Well, this issue is much more complicated – we already know that a computer is fine with solving abstract, numerical problems, but it has a hard time recognizing objects or behaving in an environment (e.g. avoiding obstacles), which, on the other hand, is pretty basic for animals.

How does the brain scale? Can we scale up such an electronic system in a way similar to how our brains were scaled up over the course of evolution? Would we get the same results?

References

[0] W. Shakespeare. (1595). Romeo and Juliet, Act II, scene ii. Source: https://en.wikiquote.org/wiki/William_Shakespeare
[1] Geekweek’s article: https://www.geekweek.pl/news/2020-03-24/intel-stworzyl-neuromorficzny-komputer-odpowiadajacy-mozgowi-malego-ssaka/
[2] Neuromorphic computing / engineering: https://en.wikipedia.org/wiki/Neuromorphic_engineering
[3] Furber, S. (2016). Large-scale neuromorphic computing systems. Journal of Neural Engineering, 13(5), 051001.
https://iopscience.iop.org/article/10.1088/1741-2560/13/5/051001/meta
[4] Bai, K., & Yi, Y. (2019). Opening the “Black Box” of Silicon Chip Design in Neuromorphic Computing. In Bio-Inspired Technology. IntechOpen.
https://www.intechopen.com/books/bio-inspired-technology/opening-the-black-box-of-silicon-chip-design-in-neuromorphic-computing
[5] Hodgkin-Huxley model: https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model
[6] Sir John Eccles: https://en.wikipedia.org/wiki/John_Eccles_(neurophysiologist)
[7] https://newsroom.intel.com/news/intel-scales-neuromorphic-research-system-100-million-neurons/#gs.1knyix
[8] Davies, M., Srinivasa, N., Lin, T. H., Chinya, G., Cao, Y., Choday, S. H., … & Liao, Y. (2018). Loihi: A Neuromorphic Manycore Processor with On-Chip Learning. IEEE Micro, 38(1), 82-99.
The preprint I read and refer to was found on the authors' ResearchGate: https://www.researchgate.net/publication/322548911_Loihi_A_Neuromorphic_Manycore_Processor_with_On-Chip_Learning.
[9] Electronic nose: https://newsroom.intel.com/news/how-computer-chip-smell-without-nose/#gs.7n3g2h
[10] Intel’s presentation of neuromorphic computing: https://www.intel.pl/content/www/pl/pl/research/neuromorphic-computing.html
[11] Neuromorphic Research Community: https://newsroom.intel.com/editorials/intel-creates-neuromorphic-research-community/#gs.7n3cfn
[12] Azevedo, F. A., Carvalho, L. R., Grinberg, L. T., Farfel, J. M., Ferretti, R. E., Leite, R. E., … & Herculano‐Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled‐up primate brain. Journal of Comparative Neurology, 513(5), 532-541.
[13] Herculano-Houzel, S. (2009). The human brain in numbers: a linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3, 31.
[14] Neurons in numbers: https://en.wikipedia.org/wiki/List_of_animals_by_number_of_neurons
