Facial recognition under threat from morphing attacks
As automated face recognition is used ever more widely for personal identification, it becomes an increasingly attractive target for so-called ‘morphing attacks’ by cybercriminals. Now, German researchers are developing a system that foils these attacks using machine learning methods.
“Criminals are capable of tricking face recognition systems — like the ones used at border control — in such a way that two people can use one and the same passport,” said Lukasz Wandzik, scientist at the Fraunhofer Institute for Production Systems and Design Technology IPK in Berlin. Together with his colleagues at the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) and other partners, he is developing a process that identifies the image anomalies that morphing introduces during digital image processing.
“In a morphing attack, two facial images are melded into a single synthetic facial image that contains the characteristics of both persons,” Wandzik explained. As a result, biometric face recognition systems authenticate the identity of both persons based on this manipulated photo in the passport. These attacks can take place for example before or during the process of applying for an ID document.
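The blend Wandzik describes can be illustrated in a few lines. Below is a minimal NumPy sketch, assuming the two face images have already been aligned; real attacks additionally warp both faces onto shared facial landmarks before blending, and the function name here is purely illustrative:

```python
import numpy as np

def morph(face_a: np.ndarray, face_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two pre-aligned face images into one synthetic image.

    A real morphing attack first warps both faces onto common
    landmarks; this sketch assumes the arrays are already aligned
    and simply cross-dissolves the pixel values.
    """
    if face_a.shape != face_b.shape:
        raise ValueError("images must be aligned to the same shape")
    blended = alpha * face_a.astype(np.float64) + (1.0 - alpha) * face_b.astype(np.float64)
    return blended.astype(np.uint8)
```

With `alpha = 0.5` the result carries equal characteristics of both persons, which is exactly why a biometric matcher can authenticate either of them against the same photo.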
The researchers are tackling this problem by analysing simulated image data. Here they apply modern image processing and machine learning methods, in particular deep neural networks designed explicitly for processing image data. These networks consist of many layers linked together in multilayer structures; built from connections between mathematical computing units, they imitate the neural structure of the human brain.
In order to test the processes and systems being developed, the project partners start by generating the data used to train the image processing programs to detect manipulations. Here different faces are morphed into one face.
“Using morphed and real facial images, we’ve trained deep neural networks to decide whether a given facial image is authentic or the product of a morphing algorithm,” explained Professor Peter Eisert, head of the Vision & Imaging Technologies department at Fraunhofer HHI. “The networks can recognise manipulated images based on the changes occurring during manipulation, especially in semantic areas such as facial characteristics or reflections in the eyes.”
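The training setup Eisert describes — feed labelled real and morphed images to a network that learns to decide whether an image is authentic — can be sketched in miniature. The project uses deep convolutional networks on raw image data; the toy logistic regression below, trained on a single hypothetical artefact statistic, only illustrates the train-and-decide loop, not the actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in feature: assume genuine images score low and
# morphs score high on some artefact statistic (illustration only).
real_feats  = rng.normal(0.2, 0.05, size=(200, 1))
morph_feats = rng.normal(0.8, 0.05, size=(200, 1))
X = np.vstack([real_feats, morph_feats])[:, 0]
y = np.concatenate([np.zeros(200), np.ones(200)])  # 1 = morphed

# Logistic regression trained by gradient descent.
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X * w + b)))   # predicted morph probability
    w -= 0.5 * np.mean((p - y) * X)          # gradient step on weight
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

def is_morph(score: float) -> bool:
    """Decide authentic vs. morphed from the learned model."""
    return 1.0 / (1.0 + np.exp(-(score * w + b))) > 0.5
```

The deep networks in the project make the same kind of binary decision, but learn their discriminative features (facial characteristics, eye reflections) directly from pixels rather than from a hand-picked statistic.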
The neural networks decide very reliably whether or not an image is genuine, with an accuracy rate of over 90% on the test databases created in the project. “But the real problem is that we don’t know how the neural network makes its decision,” Prof Eisert said.
Thus, in addition to the accuracy of the decision, the Fraunhofer HHI researchers are also interested in the basis for it. To answer this question they analyse the regions of the facial image that are relevant to the decision using LRP (layer-wise relevance propagation) algorithms they developed themselves. This helps to identify suspicious areas in a facial image and to classify the artefacts created during a morphing process.
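Fraunhofer HHI’s own LRP variants are not published, but the generic epsilon rule conveys the idea: the network’s output relevance is redistributed backwards through each layer in proportion to how much each input contributed to the activation, until every pixel carries a relevance score. A sketch for one dense layer in NumPy:

```python
import numpy as np

def lrp_epsilon(weights: np.ndarray, activations: np.ndarray,
                relevance_out: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Propagate relevance back through one dense layer (epsilon rule).

    Each input j receives relevance in proportion to its contribution
    a_j * w_jk to every output pre-activation z_k.
    """
    z = activations @ weights               # pre-activations of the layer
    s = relevance_out / (z + eps * np.sign(z))
    return activations * (weights @ s)      # relevance attributed to inputs

# Tiny example: two inputs, two outputs, all relevance on output 0.
a = np.array([1.0, 2.0])
W = np.array([[0.5, -0.3],
              [0.2,  0.4]])
R_out = np.array([1.0, 0.0])
R_in = lrp_epsilon(W, a, R_out)
```

A useful property of this rule is that total relevance is (approximately) conserved from layer to layer, so the per-pixel scores can be rendered as a heatmap — which is how suspicious regions such as the eyes show up.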
Initial reference tests confirm that the algorithms can be used to successfully identify morphed images. The LRP software labels the facial areas relevant to the decision accordingly; the eyes frequently provide evidence of image tampering. The researchers also use this information to design more robust neural networks in order to detect the widest possible variety of attack methods.
The partners have already developed a demonstrator software package including anomaly detection and evaluation procedures. It fuses a number of different detector modules from the individual project partners: the interconnected modules apply different detection methods to find manipulations and generate an overall result at the end of the process. The objective is to integrate the software into existing face recognition systems at border checkpoints, or to enhance those systems with morphing detection components and thus rule out falsification through attacks of this type.
“Criminals can resort to more and more sophisticated attack methods; for example, AI methods that generate completely artificial facial images,” Prof Eisert said. “By optimising our neural networks, we’re trying to stay one step ahead of the culprits and to identify future attacks.”