AI Produces Human Faces Out of Totally Pixelated Photos
Artificial intelligence networks have learnt a new trick: creating photo-realistic faces from just a few pixelated dots, adding in features such as eyelashes and wrinkles that can’t even be found in the original.
Before you freak out, it’s worth noting that this isn’t some kind of creepy reverse pixelation that can undo blurring: the faces the AI comes up with are artificial – they don’t belong to real people. But it’s a cool technological step forward from what such networks have been able to do before.
The PULSE (Photo Upsampling via Latent Space Exploration) system can produce photos with up to 64 times the resolution of the source images – 8 times more detailed than earlier methods could manage.
“Never have super-resolution images been created at this resolution before with this much detail,” says computer scientist Cynthia Rudin, from Duke University.
What PULSE does is work backwards, generating full-resolution photos of faces that would look like the blurred originals when pixelated, rather than starting with the blurred image and trying to add in detail to find a match. A grid of 16 x 16 pixels can be converted into a 1,024 x 1,024 image in seconds, with more than a million pixels added.
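That back-to-front check is easy to sketch in code. Here’s a hedged toy version (ours, not the researchers’): simple block averaging stands in for the real downscaling operator, and a random array stands in for a generated face.

```python
import numpy as np

def downscale(img, factor):
    """Downscale a square image by averaging factor x factor blocks of pixels."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# A 1,024 x 1,024 candidate "face" (random stand-in) and its 16 x 16 pixelated version.
candidate = np.random.rand(1024, 1024)
source = downscale(candidate, 64)          # 1024 / 64 = 16, so source is 16 x 16

# PULSE's acceptance test: does the candidate downscale back to the source?
# (By construction, this candidate passes trivially - real candidates are searched for.)
assert source.shape == (16, 16)
assert np.allclose(downscale(candidate, 64), source)
print(candidate.size - source.size)        # pixels added: 1,048,576 - 256 = 1,048,320
```

The arithmetic at the end is where the article’s “more than a million pixels added” comes from.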
The system makes use of a generative adversarial network or GAN, which essentially puts two neural networks (complex AI learning engines designed to mimic the human brain) up against each other, both trained on the same set of photos. One generates faces, and the other decides if the face is realistic enough.
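To get a feel for that adversarial tug-of-war, here’s a deliberately tiny sketch (again ours – nothing like the full networks PULSE relies on): the “generator” is just a learnable shift applied to noise, the “discriminator” is a one-variable logistic classifier, and each is trained against the other until the generator’s output resembles the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: scalars drawn from N(4, 1). The generator must learn to mimic them.
    return rng.normal(4.0, 1.0, n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

g_shift = 0.0         # generator: a single learnable shift added to noise
d_w, d_b = 0.1, 0.0   # discriminator: D(x) = sigmoid(d_w * x + d_b)
lr = 0.05

for step in range(2000):
    # --- Train the discriminator: push D(real) towards 1 and D(fake) towards 0 ---
    real = real_batch(32)
    fake = rng.normal(0.0, 1.0, 32) + g_shift
    for x, label in [(real, 1.0), (fake, 0.0)]:
        p = sigmoid(d_w * x + d_b)
        grad = p - label                      # d(cross-entropy)/d(logit)
        d_w -= lr * np.mean(grad * x)
        d_b -= lr * np.mean(grad)
    # --- Train the generator: shift its output so D(fake) moves towards 1 ---
    fake = rng.normal(0.0, 1.0, 32) + g_shift
    p = sigmoid(d_w * fake + d_b)
    g_shift -= lr * np.mean((p - 1.0) * d_w)  # chain rule through the discriminator

print(g_shift)  # the learned shift; it should drift towards the real mean of 4
```

Even in this toy form the key dynamic is visible: neither network is told what the real data looks like directly – the generator improves only because the discriminator keeps catching it out.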
By taking this route, the researchers are able to get images that don’t have the fuzzy or indistinct areas that sometimes appear in the final product when other techniques are used.
Part of the system’s success is down to the way it looks for any image that will downscale to the original, rather than trying to find the one ‘true’ image that would fit the source. It quickly tests a whole host of options – working through the “latent space” in its name – until it finds a match.
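A hedged sketch of that search, with a random linear map standing in for the pretrained face generator: sample latent vectors, and keep whichever candidate image downscales closest to the low-resolution source. (The real system walks the latent space with guided steps rather than blind sampling – random search just keeps the sketch short.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: the "generator" is a fixed random linear map from a latent
# vector to a flat 64 x 64 "image" (PULSE uses a pretrained face GAN instead).
LATENT_DIM, IMG = 32, 64
G = rng.normal(size=(IMG * IMG, LATENT_DIM))

def generate(z):
    return (G @ z).reshape(IMG, IMG)

def downscale(img, factor):
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# The low-resolution source we must match (8 x 8 here).
target = downscale(generate(rng.normal(size=LATENT_DIM)), 8)

# Explore the latent space: any candidate that downscales to the source will do,
# so keep the best match found so far.
best_z, best_err = None, np.inf
for _ in range(500):
    z = rng.normal(size=LATENT_DIM)
    err = np.mean((downscale(generate(z), 8) - target) ** 2)
    if err < best_err:
        best_z, best_err = z, err

full_res = generate(best_z)           # a plausible high-resolution image...
assert full_res.shape == (IMG, IMG)   # ...chosen because it pixelates back to the source
```

The important point the sketch captures is the acceptance criterion: the output isn’t judged against a “true” original (there isn’t one), only against whether it downscales to the input.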
GANs such as this one continue to grow in complexity: you may remember that tech giant Nvidia has been showing off a generative adversarial network that’s able to produce creepily realistic-looking pictures of people who don’t actually exist.
In that case, the images are generated by mixing existing faces into something new. In the PULSE system demonstrated by researchers here, the blocks of a pixelated image are used as the source instead.
Multiple faces can be produced from the same source image, and the same idea can be applied to create photos of anything out of a blocky picture, the researchers say – cats, sunsets, trees, balloons or anything else.
This aspect could make it suitable for use in all kinds of other areas, including medicine, microscopy, astronomy and satellite imagery.
You can find more details on the PULSE website, and even try it out on your own pictures.
The research has been presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR), and a paper is available on pre-print server arXiv.org.