Tech gauges how brain learns faces
![Tech gauges how brain learns faces](https://gtechbooster.com/media/2019/12/tech-gauges-how-brain-learns-faces.png)
Real-world, unconstrained images like these (a) are used to train facial recognition networks. Testing for the study was done on highly controlled laser-scan data varying by viewpoint (b, columns), illumination (b, rows) and caricature-like identity strength (c). Credit: University of Texas at Dallas.
Facial recognition technology has advanced swiftly in the last five years. As University of Texas at Dallas researchers try to determine how computers have gotten as good as people at the task, they are also shedding light on how the human brain sorts information.
UT Dallas scientists have analyzed the performance of the latest generation of facial recognition algorithms, revealing the surprising way these programs, which are based on machine learning, work. Their study, published online Nov. 12 in Nature Machine Intelligence, shows that these sophisticated computer programs, called deep convolutional neural networks (DCNNs), figured out how to identify faces differently than the researchers expected.
"For the last 30 years, people have presumed that computer-based visual systems get rid of all the image-specific information: angle, lighting, expression and so on," said Dr. Alice O'Toole, senior author of the study and the Aage and Margareta Møller Professor in the School of Behavioral and Brain Sciences. "Instead, the algorithms keep that information while making the identity more important, which is a fundamentally new way of thinking about the problem."
In machine learning, computers analyze large amounts of data in order to learn to recognize patterns, with the goal of being able to make decisions with minimal human input. O'Toole said the progress made by machine learning for facial recognition since 2014 has "changed everything by quantum leaps."
"Things that were never doable before, that have impeded computer vision technology for 30 years, became not only doable, but pretty easy," O'Toole said. "The catch is that nobody understood how it works."
Previous-generation algorithms were effective in recognizing faces that had only minor changes from the image they already knew. Current technology, however, knows an identity well enough to overcome changes in expression, viewpoint or appearance, such as removing glasses.
"These new algorithms operate more like you and me," O'Toole said. "That's in part because they have accumulated a massive amount of experience with variations in how one identity can appear. But that's not the whole picture."
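The matching behavior described above can be sketched in embedding terms: a DCNN maps each face image to a vector, and two images are declared the same person when their vectors are close enough. The sketch below is a minimal illustration with made-up NumPy vectors standing in for real network output; the 128-dimensional size, the identity names, and the 0.6 similarity threshold are arbitrary assumptions, not values from the study.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_identity(emb_a, emb_b, threshold=0.6):
    """Declare a match when the embeddings are similar enough.

    In a real system, emb_a and emb_b would come from the top layer
    of a trained DCNN; here they are random placeholders.
    """
    return cosine_similarity(emb_a, emb_b) >= threshold

# Toy embeddings: two views of the same (hypothetical) person,
# plus a different person.
rng = np.random.default_rng(0)
alice_frontal = rng.normal(size=128)
alice_profile = alice_frontal + 0.3 * rng.normal(size=128)  # small variation
bob = rng.normal(size=128)

print(same_identity(alice_frontal, alice_profile))  # True: same person
print(same_identity(alice_frontal, bob))            # False: different person
```

The point of the sketch is that a change in viewpoint or expression only nudges the embedding, while a change of identity moves it to an essentially unrelated direction in the space.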
O'Toole's team set about investigating how the algorithms operate, both to substantiate the trust put in their results and, as lead author Matthew Hill explained, to shed light on how the visual cortex of the human brain performs the same task.
âThe structure of this type of neural network was originally inspired by how the brain processes visual information,â said Hill, a cognition and neuroscience doctoral student. âBecause it excels at solving the same problems that the brain does, it can give insight into how the brain solves the problem.â
The type of neural network algorithm that the team studied dates back to 1980, but the power of neural networks grew exponentially more than 30 years later.
"Early this decade, two things happened: The internet gave this program millions of images and identities to work with (unbelievable amounts of easily available data), and computing power grew, so that, instead of having two or three layers of 'neurons' in the neural network, you can have more than 100 layers, as this system now does," O'Toole said.
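The "layers of neurons" O'Toole describes are, concretely, stacked convolution operations: each layer slides small filters over the previous layer's output and applies a nonlinearity, and depth comes from repeating that pattern. A minimal sketch, assuming a single random 3x3 filter per layer in place of the many learned filters a real DCNN uses:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def layer(image, kernel):
    """One 'layer': convolution followed by a ReLU nonlinearity."""
    return np.maximum(conv2d(image, kernel), 0.0)

# Stacking layers: a modern DCNN repeats this pattern 100+ times,
# with many learned filters per layer instead of the single random
# filter used here for illustration.
rng = np.random.default_rng(1)
x = rng.normal(size=(32, 32))      # stand-in for a small face image
for depth in range(3):             # three toy layers
    k = rng.normal(size=(3, 3))    # stand-in for a learned filter
    x = layer(x, k)
print(x.shape)  # each 3x3 layer trims 2 pixels: (26, 26)
```

In a trained network the filters are learned from millions of labeled images rather than drawn at random; the sketch only shows how depth is built up by composition.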
Despite the algorithm's intended purpose, the scale of its calculations, which number at least in the tens of millions, means scientists are unable to understand everything that it does.
"Even though the algorithm was designed to model neuron behavior in the brain, we can't keep track of everything done between input and output," said Connor Parde, an author of the paper and a cognition and neuroscience doctoral student. "So we have to focus our research on the output."
To demonstrate the algorithm's capabilities, the team used caricatures, extreme versions of an identity, which Y. Ivette Colón BS'17, a research assistant and another author of the study, described as "the most 'you' version of you."
"Caricatures exaggerate your unique identity relative to everyone else's," O'Toole said. "In a way, that's exactly what the algorithm wants to do: highlight what makes you different from everyone else."
To the surprise of the researchers, the DCNN actually excelled at connecting caricatures to their corresponding identities.
"Given these distorted images with features out of proportion, the network understands that these are the same features that make an identity distinctive and correctly connects the caricature to the identity," O'Toole said. "It sees that distinctive identity in ways that none of us anticipated."
So, as computer systems begin to equal, and on occasion surpass, the facial recognition performance of humans, could the algorithm's basis for sorting information resemble what the human brain does?
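One simple way to picture the caricature result (a sketch of the general idea, not the study's actual procedure) is in the network's output space: exaggerating an identity means pushing its embedding further from the average face along the same direction, so what makes the face distinctive is amplified rather than changed. All of the numbers below are illustrative placeholders.

```python
import numpy as np

def caricature(identity_emb, mean_emb, strength=1.5):
    """Exaggerate an identity by pushing its embedding away from
    the average face. strength > 1 gives a caricature; strength = 1
    returns the original; the default 1.5 is an arbitrary choice."""
    return mean_emb + strength * (identity_emb - mean_emb)

rng = np.random.default_rng(2)
population = rng.normal(size=(100, 128))  # toy embeddings for 100 faces
mean_face = population.mean(axis=0)
person = population[0]

exaggerated = caricature(person, mean_face, strength=2.0)

# The caricature lies along the same direction from the mean as the
# original identity, just further out, so identity-based matching
# still finds the right person.
direction = person - mean_face
car_direction = exaggerated - mean_face
cos = np.dot(direction, car_direction) / (
    np.linalg.norm(direction) * np.linalg.norm(car_direction))
print(round(cos, 6))  # 1.0: identical direction, doubled distance
```

This mirrors the quote above: because the caricature preserves the direction that makes the identity distinctive, exaggeration makes the identity easier, not harder, to recognize.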
To find out, researchers need a better understanding of the human visual cortex. The most detailed information available comes from functional MRI (fMRI), which can be used to image the activity of the brain while a subject performs a mental task. Hill described fMRI as "too noisy" to reveal such small details.
"The resolution of an fMRI is nowhere near what you need to see what's happening with the activity of individual neurons," Hill said. "With these networks, you have every computation. That allows us to ask: Could identities be organized this way in our minds?"
O'Toole's lab will tackle that question next, thanks to a recent grant of more than $1.5 million across four years from the National Eye Institute of the National Institutes of Health.
"The NIH has tasked us with the biological question: How relevant are these results for human visual perception?" she said. "We have four years of funding to find an answer."
More Information
- Converging solutions: Artificial networks shed light on human face recognition
- Matthew Q. Hill et al., "Deep convolutional neural networks in the face of caricature," Nature Machine Intelligence (2019). DOI: 10.1038/s42256-019-0111-7
Last updated on November 30th, 2022