Rachel Coyne on 19 Jul 2017

Machine Learning recreates image that monkeys’ brains use 205 neurons to process

By measuring the brain activity of monkeys, scientists have established that the responses of just 205 brain cells are enough to encode a face image – a far simpler code than originally thought. This finding opens up a new world of face-recognition algorithms in machine learning and artificial intelligence that are ripe for commercial application. Your imagination can go wild thinking of everyday uses, but consider forensics, where the face of a criminal could be reconstructed by analysing a witness’s brain activity.

The following article was originally published on LinkedIn by Munish Mehta. Munish is a Data Scientist and Advanced Analytics Practice Lead at Satalsyt.

For a long time, scientists have been researching how the brain stores and recognises images. Until now, researchers in this area had identified regions of the brain associated with visual processing but were unable to demonstrate how that processing works. The key to solving this puzzle was understanding brain cells called neurons. A neuron is a specialised cell that transmits nerve impulses, and there are billions of interconnected neurons in a brain. Scientists hypothesised that each neuron could store and recognise an image, but could neither provide a working model of this hypothesis nor debunk it. The brain’s code for processing images was believed to be highly complex. It turns out to be much simpler: a machine learning algorithm can decode the neural responses recorded directly from the brains of monkeys and replicate an image they have been shown.

Last month, a team of biomedical researchers at the California Institute of Technology (USA) debunked the old hypothesis and proposed a new one. The team’s work confirmed some previous findings and produced a simple model that solves this puzzle. Below are some of the key insights:

  1. A region of the brain called the IT (inferotemporal) cortex responds to images.
  2. A single neuron (brain cell) in this region does not, on its own, recognise or store the image.
  3. Within this region, cells in a sub-region called MF (Middle Fundus) respond mainly to the shape of the object. They do so via electrical impulses, which can be measured.
  4. A second sub-region called AM (Anterior Medial) responds mainly to the appearance of the object. The cells in AM provide information complementary to that from the cells in MF.

The researchers devised 50 measurable factors from a set of 2000 face images and divided them into two halves: 25 factors measured attributes related to the shape of a face (oval or round face, hair thickness, etc.), and the other 25 measured attributes related to its appearance (hair colour, smiling face, etc.).
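For a rough sense of how such factors can be constructed, here is a minimal sketch. It assumes (my assumption, not the paper’s published pipeline) that the 25 shape factors come from a principal component analysis of facial-landmark positions and the 25 appearance factors from a PCA of normalised face pixels; the random arrays simply stand in for real measurements.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-ins for the 2000 face images: each face described by
#  - landmark coordinates (outline of eyes, mouth, jaw, ...)  -> "shape"
#  - pixel intensities of the face warped to a standard shape -> "appearance"
rng = np.random.default_rng(0)
landmarks = rng.normal(size=(2000, 140))   # 2000 faces x 70 (x, y) landmark points
pixels = rng.normal(size=(2000, 4096))     # 2000 faces x 64*64 normalised pixels

# Reduce each description to 25 factors, matching the article's 25 + 25 split.
shape_factors = PCA(n_components=25).fit_transform(landmarks)     # (2000, 25)
appearance_factors = PCA(n_components=25).fit_transform(pixels)   # (2000, 25)

# Every face is now a single 50-dimensional feature vector.
features = np.hstack([shape_factors, appearance_factors])         # (2000, 50)
print(features.shape)
```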

In an experimental setting using monkeys, the scientists worked out which sets of cells respond to these 50 factors. By measuring and quantifying the electrical activity of those brain cells (neurons), they could accurately reconstruct images from brain activity alone in a test scenario, supporting their hypothesis.

Altogether, the responses of only 205 neurons (brain cells) were sufficient to recreate an image. Monkeys were shown random images while their brain activity was recorded; the algorithm then processed those brain signals and accurately reconstructed each image from just those 205 neurons. The results show that this neural code is actually very simple and can readily be replicated by machine learning.

To understand how an image is reconstructed from brain activity, let us start with a simple example. For simplicity, assume we are measuring:

  1. A single attribute of shape.
  2. A single attribute of appearance.

The cells in the MF sub-region measure the shape of an object. In this example, the shape attribute is how oblong an ellipse is. A signal from the MF cells tells us how oblong the ellipse is: the stronger the signal from the MF cells, the flatter the ellipse becomes (a circle being the special case of an ellipse that is not flattened at all). The code sketch and diagram below illustrate the concept.
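As a toy illustration of this mapping (my own sketch, not code from the study), the snippet below treats the MF signal as a number between 0 and 1 and draws a flatter ellipse the stronger the signal is.

```python
import numpy as np
import matplotlib.pyplot as plt

def ellipse_from_mf_signal(signal, radius=1.0):
    """Toy mapping: signal 0 -> a circle, signal close to 1 -> a very flat ellipse."""
    theta = np.linspace(0, 2 * np.pi, 200)
    height = radius * (1.0 - 0.9 * signal)      # stronger signal -> flatter shape
    return radius * np.cos(theta), height * np.sin(theta)

fig, axes = plt.subplots(1, 3, figsize=(9, 3))
for ax, signal in zip(axes, [0.0, 0.5, 0.9]):
    x, y = ellipse_from_mf_signal(signal)
    ax.plot(x, y)
    ax.set_title(f"MF signal = {signal}")
    ax.set_aspect("equal")
plt.show()
```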

[Figure: A machine learning algorithm processed the monkeys' brain signals and reconstructed the image accurately.]

The cells in the AM sub-region measure the appearance of the object. A signal from the AM cells tells us about the colour of the image (in this example, its shade of grey).

[Figure: A machine learning algorithm processed the monkeys' brain signals and reconstructed the image accurately.]

By measuring the signals along both the ‘shape axis’ and the ‘appearance axis’, the object can be reconstructed, as the sketch and diagram below illustrate.
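Putting the two axes together, here is a small illustrative sketch (again an assumption for teaching purposes, not the researchers’ code): the MF signal sets how flat the ellipse is and the AM signal sets its shade of grey, so one pair of measurements is enough to redraw the object.

```python
import numpy as np
import matplotlib.pyplot as plt

def reconstruct(mf_signal, am_signal):
    """Toy two-axis reconstruction: shape from MF, greyscale appearance from AM."""
    theta = np.linspace(0, 2 * np.pi, 200)
    x = np.cos(theta)
    y = (1.0 - 0.9 * mf_signal) * np.sin(theta)  # stronger MF signal -> flatter ellipse
    grey = 1.0 - am_signal                       # stronger AM signal -> darker shade
    return x, y, (grey, grey, grey)

fig, ax = plt.subplots()
x, y, colour = reconstruct(mf_signal=0.7, am_signal=0.4)
ax.fill(x, y, color=colour)
ax.set_aspect("equal")
plt.show()
```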

[Figure: A machine learning algorithm processed the monkeys' brain signals and reconstructed the image accurately.]

We can extrapolate this concept from 2 attributes to 50. By measuring the signals along 50 such axes, the algorithm reconstructed a complex face image from the activity of the recorded brain cells.

25 attributes (features) were measured from the MF cells for shape information, and the other 25 were measured from the AM cells for appearance information.
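To make the step from 2 axes to 50 concrete, here is a minimal decoding sketch with simulated data standing in for the real recordings. The dimensions (2000 images, 205 neurons, 50 factors) come from the article, but the use of ordinary linear regression and every variable name here are my own assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_images, n_neurons, n_factors = 2000, 205, 50

# Simulated data: each face is a 50-factor vector, and each of the 205 neurons
# fires as a noisy linear combination of those factors.
mixing = rng.normal(size=(n_factors, n_neurons))
features = rng.normal(size=(n_images, n_factors))                   # 50 factors per face
responses = features @ mixing + 0.1 * rng.normal(size=(n_images, n_neurons))

# Learn a linear decoder: firing rates of 205 neurons -> 50 face factors.
decoder = LinearRegression().fit(responses[:1500], features[:1500])

# Decode held-out faces from their neural responses alone.
decoded = decoder.predict(responses[1500:])
print(np.corrcoef(decoded.ravel(), features[1500:].ravel())[0, 1])  # close to 1

# The 50 decoded factors would then be pushed back through the shape/appearance
# model to render the reconstructed face image.
```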

[Figure: A machine learning algorithm processed the monkeys' brain signals and reconstructed the image accurately.]

This work opens up a new world of face-recognition machine learning algorithms. It could also spur further activity in speech-processing models. A range of commercial applications could kick-start from this work.
