San Francisco, June 19 (IANS) Facebook has collaborated with researchers at the Michigan State University (MSU) to develop a method of detecting and attributing deepfakes.
It relies on reverse engineering, working back from a single AI-generated image to the generative model used to produce it.
Image attribution can identify a deepfake’s generative model if it was one of a limited number of generative models seen during training.
But the vast majority of deepfakes will have been created by models that were not seen during training.
“During image attribution, those deepfakes are flagged as having been produced by unknown models, and nothing more is known about where they came from, or how they were produced,” said Facebook.
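The closed-set limitation described above can be sketched as a toy classifier: it picks the most likely of a fixed set of known generative models, and falls back to flagging the image as coming from an unknown model when confidence is low. The model names, scores, and threshold here are purely illustrative assumptions, not Facebook's actual system:

```python
import math

# Hypothetical set of generative models seen during attribution training.
KNOWN_MODELS = ["model_a", "model_b", "model_c"]

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def attribute(logits, threshold=0.9):
    """Return a known model name when the classifier is confident,
    otherwise flag the deepfake as produced by an unknown model."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return KNOWN_MODELS[best]
    return "unknown model"

# A confident score pattern maps to a known model...
print(attribute([9.0, 1.0, 0.5]))   # model_a
# ...while ambiguous scores are flagged as unknown, and nothing more
# can be said about where the image came from.
print(attribute([1.2, 1.1, 1.0]))   # unknown model
```

In this toy setup, any image from a model outside the training set lands in the "unknown" bucket — which is exactly the gap the new reverse-engineering method aims to close.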
The company said that with the new method, researchers will now be able to obtain more information about the model used to produce particular deepfakes.
“Our method will be especially useful in real-world settings where the only information deepfake detectors have at their disposal is often the deepfake itself,” Facebook said.
To combat the spread of disinformation, Microsoft last year also unveiled a tool to spot deepfakes, or synthetic media: photos, videos or audio files manipulated by Artificial Intelligence (AI) that are very hard to identify as genuine or fake.