– Æ e trønder.

In a video made by the Norwegian Media Authority and researchers at the Norwegian Defence Research Establishment (FFI), Liberal Party (Venstre) politician Abid Raja appears as a trønder from Byåsen in Trondheim, with a clear Trøndelag dialect and a dubious story about how Abid became Einar. The voice in the video belongs to FFI researcher, and trønder, Eskil Sivertsen.

– We made this video with completely ordinary software that anyone can buy and download from the internet, and which requires no special prior knowledge to use. That was rather the point. It is quite possible to create a far better and more convincing deepfake than this one. But we wanted to show how easy it is to create one that might be good enough to pass on social media, on a small mobile screen, says Eskil Sivertsen.

Abid Raja and Eskil Sivertsen during the recording of the video. Photo: NewsLab/Medietilsynet

"Deepfake", briefly explained, means using artificial intelligence to build a model of a person's face and then replacing the face in an existing video with that model. The voice can be replaced in the same way. All of it in an attempt to blur the line between what is true and what is not.

The video is part of the Norwegian Media Authority's campaign "Stop. Think. Check".

– Things are moving very fast. There is war in Europe. It is an election year in Norway, and next year there are elections in the United States. These are good reasons to warn about what may await us on this front, says Eskil Sivertsen.

– I think it is important, primarily because the technology is suddenly taking a big leap forward. What we have seen coming, and expected to arrive with high quality and potentially large reach, is now starting to appear. It is moving very fast. Then it is wise to be a little ahead of it and say "this is coming now".

The upcoming local elections also make the warning timely, he says.

– The political debate gets louder in an election year.
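The face-swap idea described above can be sketched conceptually. This is a minimal illustration only, under loose assumptions: real deepfake systems use trained neural networks and face detectors, while here frames, faces and the "model" are plain labelled placeholders.

```python
# Conceptual sketch of the face-swap described in the article: a model
# of person B's face is built, then the face in every frame of an
# existing video of person A is replaced with the model's output while
# the rest of the frame is kept. NOT a working deepfake pipeline.
from dataclasses import dataclass

@dataclass
class Frame:
    background: str   # everything in the shot except the face
    face: str         # whose face currently appears in the frame

def train_face_model(target_person: str):
    """Stand-in for training a generative model of the target's face."""
    def render_face(source_frame: Frame) -> str:
        # A real model would generate the target's face matching the
        # source frame's pose, lighting and expression.
        return f"{target_person}-face (matching {source_frame.face})"
    return render_face

def swap_faces(video, face_model):
    """Replace the face in every frame, keeping the background intact."""
    return [Frame(f.background, face_model(f)) for f in video]

original = [Frame("podium shot", "Person A"), Frame("close-up", "Person A")]
model = train_face_model("Person B")
fake = swap_faces(original, model)
```

The point the sketch tries to capture is that the background footage is real throughout; only the face layer is synthetic, which is why the result can be convincing on a small screen.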
It may then be more likely that some actors will use this kind of tool to sow doubt about what is true. And not least, the very existence of deepfakes and the like gives politicians and others an opportunity to claim that a real photo, a real video or a real audio recording is actually "fake".

Sivertsen also says that state actors such as Russia may want to use deepfakes to spread disinformation in the tense security situation we are in now.

With a few simple steps, artificial intelligence can turn Abid Raja into a trønder.

100,000 deepfake videos

So how many deepfakes and manipulated images are actually out there on social media in Norway? The short answer is: nobody knows. That is according to Ståle Bjørlykke Grut, who is writing a PhD on deepfakes and is the author of the book "Digital source criticism".

– There have been no thorough analyses of this in Norway. But the company Sensity has followed global developments for some time and estimated that there were just under 100,000 deepfake videos online at the end of 2020, and that the number was doubling every six months. The company DeepMedia now expects around 500,000 audio and video deepfakes to be shared on social media this year.

Ståle Bjørlykke Grut has written a book about digital source criticism. Photo: University of Oslo (UiO)

Several examples of deepfakes are now coming from the USA ahead of next year's presidential election and the Republican Party's nomination process. In one clip, former Secretary of State Hillary Clinton appears to endorse Republican candidate Ron DeSantis. But it is all a deepfake.

Another example from the US is DeSantis' campaign sharing what appear to be fake photos of Donald Trump hugging Anthony Fauci, America's answer to Espen Rostrup Nakstad during the corona pandemic.

– Here these fake images are used to smear a political opponent and to undermine Trump's credibility within the Republican Party, says Eskil Sivertsen.
A video from Republican candidate Ron DeSantis' campaign shows images of Donald Trump hugging US corona chief Anthony Fauci. Three of the images are constructed with the help of artificial intelligence (AI), according to the news organization NPR. Facsimile: NPR

Deepfakes have also been used in Russia's war of aggression against Ukraine. In a fake video from last year, President Volodymyr Zelensky appears to tell Ukrainian soldiers to lay down their weapons.

Fears that people will give up

Eskil Sivertsen says artificial intelligence and deepfakes can, in the long run, be a threat to democracy, also in Norway.

– If the web is flooded on an overwhelming scale with false and misleading information, fake images and fake videos, it will become very difficult for us to find out what is true. What can we trust, and what can we not? What can then happen is that we become a little resigned. That you withdraw, because you know that so much of the information is false, and you cannot know what to believe.

– Why is that a problem?

– I think that if you become resigned, you stop engaging with society. We depend on political engagement and an informed population to have a democracy. And if we do not have a population able to take part in a public conversation based on what is actually happening around us, democracy will be weakened.

Sivertsen says editor-controlled media are now more important than before, because they can uncover what is true. In a situation where fake videos and images abound, they will be what Sivertsen calls a magnetic north for what is true and what is not.

– But that presupposes that editor-controlled media retain their credibility and do not lose the public's trust, he says.

How easily accessible is this technology today? Development is moving fast, says Ståle Bjørlykke Grut.
– The most advanced and convincing solutions for video still require substantial resources and are reserved for the film industry, or for experts with a background there. Surprisingly good variants, especially for still images, are also available through apps that run on an ordinary computer or mobile phone.

– Why has this technology become so easily available?

– Mainly it is about increased computing power, in computers, mobile phones and cloud services. This makes it possible to carry out demanding operations in less time and with less hardware, says Bjørlykke Grut.

Tips for what to look for

Eskil Sivertsen says that all of us on the internet and social media have a responsibility not to share fake news and deepfakes. But that is easy for him to say, because how do you, as a reader, find out whether a video is fake or not?

Ståle Bjørlykke Grut shares some tips on what to look for: Making healthy skepticism toward images and videos with sensational content an instinctive reflex is perhaps the most important thing, especially if you do not already know the sender. You can then start looking at details in the image or video. Is there something off in the transitions between foreground and background? Do the contours around the face and hands look unnatural? Is the background blurry, with details hard to make out? Then there may be grounds for suspicion.

The challenge is that the technology is constantly improving, and that technical tools for revealing that something is made by AI are inadequate or do not exist at all. That is why it is wise to be source-critical while also keeping up to date on technological developments.

The Norwegian Media Authority believes that Norwegians' source-critical awareness is more important than ever to prevent deepfakes and fake news from spreading in Norway. So how is that critical awareness doing?
– We know from the most recent survey the Norwegian Media Authority carried out on Norwegians' critical understanding of the media that about half check several different sources when they come across information they are unsure is true. That is quite good, but we would like even more people to become more source-critical, says Mari Velsand, director of the Norwegian Media Authority.
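Grut's manual checks can also be written out as a simple checklist. This is purely illustrative: a tally of the warning signs named in the article, not a real detection algorithm. As the article notes, reliable technical detectors for AI-generated media are inadequate or missing.

```python
# The article's warning signs for a suspicious image or video,
# expressed as a scoring checklist. Cue names and descriptions are
# taken from Grut's tips; the scoring itself is a made-up illustration.
CUES = {
    "unknown_sender": "You do not already know who shared it",
    "sensational_content": "The content is sensational",
    "odd_foreground_background": "Transitions between foreground and background look wrong",
    "unnatural_contours": "Contours around the face or hands look unnatural",
    "blurry_background": "The background is blurry and details are hard to see",
}

def suspicion_score(observations: dict) -> int:
    """Count how many of the listed warning signs are present."""
    return sum(1 for cue in CUES if observations.get(cue, False))

obs = {"sensational_content": True, "unnatural_contours": True}
score = suspicion_score(obs)
# score is 2: two of the five warning signs are present
```

Any nonzero score is, in the spirit of the article's advice, a reason to stop, think, and check the claim against other sources before sharing.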