Facebook Creating Deepfakes To Train Artificial Intelligence Software To Remove Fake Videos



Facebook is working to combat deepfakes

Facebook is training its artificial intelligence to spot and remove heavily doctored videos from the site. Such videos, known as “deepfake” videos, use AI to make it appear as if the people in them, often politicians and other public figures, are saying things they never said. Facebook is spending more than $10 million on the project.

The company says deepfake videos are used to create distrust and spread misinformation.

“‘Deepfake’ techniques, which present realistic AI-generated videos of real people doing and saying fictional things, have significant implications for determining the legitimacy of information presented online,” Facebook Chief Technology Officer Mike Schroepfer said in a blog post explaining the project. “Yet the industry doesn’t have a great data set or benchmark for detecting them. We want to catalyze more research and development in this area and ensure that there are better open source tools to detect deepfakes.”

Facebook emphasized that it is not using videos posted by its users to train the AI. Instead, the company is hiring paid actors to create videos, which will help train Facebook’s AI to distinguish real videos from altered ones.

“It’s important to have data that is freely available for the community to use, with clearly consenting participants, and few restrictions on usage,” Schroepfer wrote. “That’s why Facebook is commissioning a realistic data set that will use paid actors, with the required consent obtained, to contribute to the challenge. No Facebook user data will be used in this data set.”