New deepfake algorithm lets you edit a speaker's words in a video

Every deepfake should have a good reason, at least at first. A new system from Adobe Research, developed in collaboration with scientists from Stanford and the Max Planck Institute, aims to cut filming time by using deepfake technology. With it, you can put any text into the mouth of a person in a recorded video as if they had actually said it, instead of shooting dozens of failed takes on set.

For the neural network to work, you need at least 40 minutes of the original video along with a transcript of what the speaker says. The program studies the speaker's facial expressions, matches fragments of the transcript to muscle movements, and builds a three-dimensional model of a "talking head". All that remains is to compose a sequence of mouth movements for the new text, generate the necessary textures, and apply them to the model.
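The steps just described — align transcript units with observed facial motion, then reuse that mapping to drive the model for new text — can be sketched as a toy lookup pipeline. Everything below (the phoneme labels, pose names, and functions) is a hypothetical illustration, not Adobe's actual system:

```python
# Toy sketch of the described pipeline: "learn" a mapping from phonemes
# (units of speech) to mouth poses seen in the source footage, then
# reuse it to assemble a pose sequence for arbitrary new text.
# All names and data here are hypothetical, for illustration only.

# Stage 1: training — pair each transcript phoneme with the mouth
# pose observed in the source video at that moment.
observed = [
    ("HH", "open"), ("EH", "wide"), ("L", "tongue-up"), ("OW", "round"),
    ("W", "round"), ("ER", "mid"), ("L", "tongue-up"), ("D", "tongue-up"),
]
phoneme_to_pose = {}
for phoneme, pose in observed:
    phoneme_to_pose[phoneme] = pose  # last observation wins in this toy

# Stage 2: synthesis — for a new line of text (as phonemes), look up
# the learned pose for each unit to drive the 3-D head model.
def poses_for(phonemes):
    return [phoneme_to_pose[p] for p in phonemes]

new_line = ["W", "EH", "L"]
print(poses_for(new_line))  # ['round', 'wide', 'tongue-up']
```

The real system, of course, works with continuous facial parameters and blends neighboring video fragments rather than using a discrete lookup, but the training/synthesis split is the same.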

The system relies on neural rendering, a machine-learning technique for photorealistic image synthesis. To add sound to the video, you need a separate module, such as the VoCo voice-synthesis service, which works in a similar way. The appearance of this deepfake was neither a sensation nor even a surprise against the background of other achievements in the field. What is more interesting is that the creators of the deepfake themselves admit they are preparing a weapon for an information war, one that could be fired very soon: in the 2020 US presidential election.

More precisely, the developers fear exactly such a turn of events and want to get ahead of it by introducing the world to a deepfake capable of flawlessly sabotaging news broadcasts and staging various "data leaks". The hope is that the world will have time to get used to the technology and develop some immunity to it, or at least that it will spur work on deepfake-detection methods and on new ways to investigate what is true and what is false. Alas, the very essence of generative adversarial networks is to produce ever more realistic fakes with each iteration, and at this, machines long ago surpassed us humans.
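That adversarial principle — a generator adjusting itself until a discriminator can no longer tell its output from real data — can be shown with a deliberately simplified toy. This is not a real GAN (no neural networks, no gradient descent): the "discriminator" is a fixed distinguishability score and the "generator" improves by random search, but the min-max dynamic is the same. All names and numbers are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

REAL_MEAN, REAL_STD = 5.0, 1.0  # the "real data" distribution (assumed)

def real_samples(n):
    """Draw samples from the real distribution the generator must mimic."""
    return [random.gauss(REAL_MEAN, REAL_STD) for _ in range(n)]

def discriminator(fakes, reals):
    """Score how distinguishable fakes are from reals (higher = easier
    to tell apart). A real GAN learns this score; here it is fixed."""
    return (abs(statistics.mean(fakes) - statistics.mean(reals))
            + abs(statistics.stdev(fakes) - statistics.stdev(reals)))

def generator(mean, std, n):
    """Produce fake samples from the generator's current parameters."""
    return [random.gauss(mean, std) for _ in range(n)]

# Adversarial loop: the generator tweaks its parameters, keeping any
# change that makes the discriminator worse at separating fake from real.
mean, std = 0.0, 3.0  # deliberately bad starting point
reals = real_samples(2000)
best = discriminator(generator(mean, std, 2000), reals)
for _ in range(1000):
    cand_mean = mean + random.uniform(-0.3, 0.3)
    cand_std = max(0.1, std + random.uniform(-0.3, 0.3))
    score = discriminator(generator(cand_mean, cand_std, 2000), reals)
    if score < best:  # the discriminator is fooled more easily: keep it
        mean, std, best = cand_mean, cand_std, score

print(round(mean, 1), round(std, 1))  # should land near 5.0 and 1.0
```

With each accepted step the fakes become statistically harder to tell from the real data, which is exactly why detection methods built today tend to be obsolete against the generators trained tomorrow.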