For the millions of people who can’t resist taking the odd selfie here and there, the ultimate technology is now here in the form of the 3D selfie. The technology was developed by computer scientists at the University of Nottingham and Kingston University and can create a 3D facial reconstruction from a single 2D image.

Using the new app, users can watch in amazement as, within just a few seconds, a single color image is transformed into a 3D model of their face right before their eyes. More than 400,000 people have tested the app so far, and more are queuing up to give it a go. You too can try it out by simply taking a selfie and uploading it to the website.

The project was led by Ph.D. student Aaron Jackson alongside fellow Ph.D. student Adrian Bulat, who are both based at the Computer Vision Laboratory in the School of Computer Science. It was done in collaboration with Dr. Vasileios Argyriou from the School of Computer Science and Mathematics at Kingston University. There’s still a long way to go before this technique is anywhere near perfect, but this breakthrough is certainly a big step in the right direction.

Dr. Yorgos Tzimiropoulos supervised the research, in which a Convolutional Neural Network (CNN) was trained on a massive dataset of 2D pictures and 3D facial models. Using this data, the CNN then reconstructs the 3D facial geometry from just a single 2D image. “The main novelty is in the simplicity of our approach which bypasses the complex pipelines typically used by other techniques. We instead came up with the idea of training a big neural network on 80,000 faces to directly learn to output the 3D facial geometry from a single 2D image,” said Dr. Tzimiropoulos.
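The core idea described above, a network that maps a single 2D image directly to 3D facial geometry, can be illustrated with a toy sketch. This is not the authors' model or code: a tiny linear map stands in for the real CNN, and the image, the voxel occupancy target, and all sizes are made-up assumptions, chosen only to show the input/output shapes and the per-voxel training signal behind direct volumetric regression.

```python
import numpy as np

# Toy sketch (NOT the authors' network): direct volumetric regression.
# Idea: a model maps a single 2D image straight to a 3D occupancy
# volume of the face. A linear map stands in for the CNN here.

rng = np.random.default_rng(0)

H = W = 8   # image resolution (tiny, for illustration only)
D = 8       # depth slices of the output volume

# Fake "dataset": one image and its ground-truth voxel volume.
image = rng.random((H, W))                    # single 2D grayscale image
target = (rng.random((D, H, W)) > 0.5) * 1.0  # binary occupancy volume

# Stand-in for a CNN: one linear map from pixels to voxel logits.
weights = rng.normal(0.0, 0.01, (D * H * W, H * W))

def predict(img):
    logits = weights @ img.ravel()
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> occupancy probability
    return probs.reshape(D, H, W)

# Train with a per-voxel cross-entropy signal: each output voxel is
# pushed toward "occupied" (1) or "empty" (0).
lr = 0.5
for _ in range(200):
    p = predict(image)
    # Gradient of binary cross-entropy w.r.t. the logits is (p - target).
    grad = (p - target).reshape(-1, 1) @ image.reshape(1, -1)
    weights -= lr * grad / (D * H * W)

volume = predict(image)
print(volume.shape)  # a full 3D volume regressed from one 2D image
```

The point of the sketch is the shape of the problem: one 2D array in, one 3D occupancy array out, trained end to end, rather than the multi-stage fitting pipelines the researchers say they bypass.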

This is no small problem. Existing systems face many challenges and still require multiple facial images to work. “Our CNN uses just a single 2D facial image, and works for arbitrary facial poses (e.g., front or profile images) and facial expressions (e.g., smiling),” Bulat confirmed. “The method can be used to reconstruct the whole facial 3D geometry including the non-visible parts of the face.”

As well as powering standard applications such as face or emotion recognition, this type of technology could improve people’s online shopping experiences through augmented reality, allowing customers to virtually try on goods before they purchase them. It could also be used to personalize computer games, simulate plastic surgery results, or help researchers better understand conditions such as depression and autism.
