People can no longer tell the difference between a genuine human face and one generated by Artificial Intelligence, according to recent research.
With the introduction of digital “deepfakes,” the notion of faking one’s identity has been elevated to a new level. Virtual people are now being created using machine learning, and the technology improves with each passing day.
“We should be concerned because these synthetic faces are incredibly effective for nefarious purposes, for things like revenge porn or fraud, for example,” says Sophie Nightingale at Lancaster University in the UK.
The Lancaster University Study
People can no longer reliably distinguish a computer-generated face from a genuine human one, according to researchers from Lancaster University. The team argues this is an urgent threat that must be addressed with safeguards to keep the public safe. AI-generated text, audio, images, and video have already been exploited for a variety of fraudulent and propagandistic purposes, the researchers note.
The scientists used StyleGAN2, a generative adversarial network (GAN) developed by Nvidia researchers, to construct the fake faces. More than 500 people took part in the research, and they were asked to judge computer-generated faces against genuine human photographs and to rate how much faith they could place in each. Photos created by Artificial Intelligence are not only lifelike but nearly indistinguishable from those of real humans.
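A GAN like StyleGAN2 pits a generator, which produces fake images, against a discriminator, which scores each image's "realness"; the two losses pull in opposite directions. The sketch below shows only those two adversarial loss terms in plain Python, with made-up discriminator scores for illustration. It is not StyleGAN2's actual training code, which adds many refinements on top of this basic objective.

```python
import math

def bce(p, label):
    """Binary cross-entropy for one predicted probability p
    against a 0/1 label (the discriminator's 'real' score)."""
    eps = 1e-12  # guard against log(0)
    return -(label * math.log(p + eps) + (1 - label) * math.log(1 - p + eps))

def discriminator_loss(real_scores, fake_scores):
    """The discriminator wants real photos scored 1 and fakes scored 0."""
    losses = [bce(p, 1) for p in real_scores] + [bce(p, 0) for p in fake_scores]
    return sum(losses) / len(losses)

def generator_loss(fake_scores):
    """The generator wants the discriminator to score its fakes as 1."""
    return sum(bce(p, 1) for p in fake_scores) / len(fake_scores)

# Illustrative scores: here the discriminator is still winning, so the
# generator's loss is large and training pressure pushes its fakes
# toward being indistinguishable from real photos.
real = [0.9, 0.8, 0.95]   # real photos, mostly scored high
fake = [0.2, 0.4, 0.1]    # generated faces, mostly scored low
print(discriminator_loss(real, fake))  # low: fakes are easy to spot
print(generator_loss(fake))            # high: generator must improve
```

Training alternates updates to the two networks until the discriminator can no longer separate real from fake, which is exactly the point at which human judges fail too.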
Nightingale and Hany Farid, a researcher at the University of California, Berkeley, asked 315 volunteers, recruited via a crowdsourcing platform, to distinguish a sample of 400 bogus pictures from 400 images of actual persons. Each batch of 400 contained 100 white, 100 African American, 100 East Asian, and 100 South Asian faces.
This group’s accuracy score was 48.2 percent, essentially at chance (50 percent). A second group of 219 people was trained to detect AI-generated portraits; according to Nightingale, their accuracy rose only to 59 percent, a modest improvement.
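A quick way to see why 48.2 percent counts as chance-level is a normal-approximation test of each score against 50 percent. The sketch below, in plain Python, back-calculates correct-answer counts from the reported percentages and treats each participant as one observation; these are illustrative approximations, not the study's raw data or its actual analysis.

```python
import math

def accuracy_vs_chance(correct, total, chance=0.5):
    """Return (accuracy, z-score) for a score tested against chance,
    using the normal approximation to the binomial distribution."""
    acc = correct / total
    se = math.sqrt(chance * (1 - chance) / total)  # standard error under chance
    return acc, (acc - chance) / se

# Untrained group: ~48.2% of 315 judgments correct (back-calculated count).
acc_u, z_u = accuracy_vs_chance(round(0.482 * 315), 315)
# Trained group: ~59% of 219 judgments correct (back-calculated count).
acc_t, z_t = accuracy_vs_chance(round(0.59 * 219), 219)
print(f"untrained: {acc_u:.1%}, z = {z_u:.2f}")  # |z| < 2: indistinguishable from guessing
print(f"trained:   {acc_t:.1%}, z = {z_t:.2f}")  # above chance, but far from reliable
```

A |z| below roughly 2 means the score is statistically indistinguishable from coin-flipping, which is the untrained group's situation; the trained group clears that bar but still misses about four faces in ten.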
“When the tech first appeared in 2014, it was bad — it looked like the Sims,” says Nightingale. “It’s a reminder of how quickly the technology can evolve. Detection will only get harder over time.”
Face-recognition technologies, like other AI systems, are far from flawless. Owing to biases in training data, some of these algorithms are less accurate at distinguishing people with darker skin tones than others. Back in 2015, an early image-detection algorithm created by Google mislabeled two Black people as apes, probably because it had been trained on far more images of chimpanzees than of humans with darker skin tones.