YouTube does not detect illegal and harmful content quickly enough

It is a combination of humans and machines that monitors the material posted on the huge video platform YouTube. Experts say it can be difficult for YouTube to detect illegal and harmful content quickly enough.

A couple of days before the shooting at the Field's shopping center in Copenhagen, the 22-year-old perpetrator uploaded a series of videos to YouTube. In the videos, he posed with several weapons. The videos were taken down, but YouTube was criticized for taking too long to remove them, Danish broadcaster DR wrote. It only happened several hours after the shooting. The videos are said to have been uploaded two days earlier, according to Jyllands-Posten.

Screenshot of the YouTube channel of the person who shot and killed three people in a shopping center in Copenhagen on 3 July this year. The channel was deleted a few hours after the attack.

Filming terrorist attacks live

Facebook received strong criticism after the Christchurch attack in New Zealand in 2019 was broadcast live. The terrorist Brenton Tarrant filmed the mass shooting in the mosque and broadcast it directly through Facebook's live function. In the aftermath of the attack, the police took action together with Facebook: with the help of artificial intelligence, they would try to catch terrorist content broadcast live, Sky News wrote.

The 22 July terrorist also had plans to post a video on YouTube. During the trial in 2012, he said that his plan was to kill former Prime Minister Gro Harlem Brundtland by beheading her.

– I was going to film it with an iPhone and post the film online. But I didn't manage to buy myself an iPhone, he said.

Philip Manshaus tried to film live when he attacked a mosque in Bærum in 2019, but had technical problems.

– There is no doubt that it has a contagion effect, says Anders Kofod-Petersen, professor of artificial intelligence at NTNU.

– Tech giants must take responsibility

– Tech giants such as YouTube and Facebook must take responsibility for preventing, detecting and quickly removing content that is harmful to children and young people. It is equally important that the authorities are active in their role and hold the tech giants responsible for protecting children.

That is what Monica Sydgård, head of Save the Children Norway's programme, says. The organization works, among other things, to teach children internet literacy.

– If a child has seen frightening or violent content online, it is important that the parents give the child the opportunity to ask questions and talk about it, says Sydgård.

Monica Sydgård of Save the Children calls on the tech giants to take more responsibility. Photo: Save the Children

YouTube: – Removing as quickly as we can

– After the heinous attack in Copenhagen, our security teams quickly identified and removed offensive content in accordance with our guidelines, says Ciaran Ward at YouTube.

Every minute, 500 hours of video are uploaded to YouTube. That is an enormous amount of material to go through. In the first quarter of 2022, YouTube removed more than 3.8 million videos and over 4.4 million channels for policy violations.

– If one is to remove, for example, weapon videos, one is forced to go through all these hours of video. It is not particularly practical for a human to do. That is why we can use modern artificial intelligence, says Kofod-Petersen at NTNU.

YouTube uses a combination of human reviewers and machine learning to detect potentially problematic content on a large scale.
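In broad strokes, such a hybrid setup lets machines score uploads and lets humans judge the uncertain cases. The sketch below is a minimal, hypothetical illustration of that idea only; the violation labels, scores and thresholds are assumptions invented for the example, and nothing here describes YouTube's actual system.

```python
# Illustrative sketch of a two-stage moderation pipeline: machine flagging
# plus human review. All names, labels and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    # Hypothetical per-violation scores from an ML classifier, 0.0 to 1.0.
    scores: dict[str, float]

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: very confident -> remove automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: uncertain -> queue for a human

def triage(video: Video) -> str:
    """Route a video based on its highest violation score."""
    label, score = max(video.scores.items(), key=lambda kv: kv[1])
    if score >= AUTO_REMOVE_THRESHOLD:
        return f"remove ({label}, score {score:.2f})"
    if score >= HUMAN_REVIEW_THRESHOLD:
        return f"human review ({label}, score {score:.2f})"
    return "publish"

# A clearly violating upload is removed; a borderline one goes to a person.
print(triage(Video("a1", {"weapons": 0.97, "self_harm": 0.02})))  # remove
print(triage(Video("b2", {"weapons": 0.71, "self_harm": 0.05})))  # human review
```

The point of the two thresholds is that automation handles the clear-cut volume Kofod-Petersen describes, while borderline material still reaches a human reviewer.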
– We remove content that violates our guidelines as quickly as we can, states Ward.

May mistake a weapon for a clarinet

But there are two problems, according to Kofod-Petersen: it is expensive, and if the system is not trained to recognize, for example, weapons, it is not as precise as one would like. It can simply confuse what is a musical instrument with what is a weapon.

– Video is much more complicated than text. If you do not have enough film or pictures of weapons, but for example of musical instruments, the system may think the weapon is a tuba or a clarinet. That makes it difficult.

(A simplified sketch of this failure mode appears at the end of this article.)

Anders Kofod-Petersen, professor of artificial intelligence at NTNU. Photo: NTNU

By using machine learning, you can analyze both what happens in a video and what is said in it. But that can be difficult if the person in the video does not speak English, Kofod-Petersen points out.

Videos that promote or glorify suicide are against YouTube's guidelines and will be removed. Ward says they cooperate with the authorities.

– Responsibility is our highest priority. Over the years, we have invested heavily in the policies and products needed to protect the YouTube community, he adds.

– A big problem

Journalist Lasse Josephsen, who specializes in right-wing extremism online, says the videos tend to attract the curious.

– It quickly becomes something people look at out of curiosity. A slightly morbid urge to look that people have.

He thinks the perpetrator in Copenhagen came across as grotesque and unstable in the videos.

Lasse Josephsen has specialized in right-wing extremism online. He says it is a big problem that the videos spread so quickly. Photo: Arnfinn Pettersen

– It may be that it has a contagion effect, he says.

Josephsen points out that this is a problem for relatives, who are terrified that the videos will appear in their inbox.

– Once the videos are out, they spread very quickly. It is a big problem. It is a problem where we have no good idea of what to do.
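Kofod-Petersen's tuba-or-clarinet example comes down to training data: a class the model has rarely seen is represented badly, so new examples of it land closer to better-known classes. The toy nearest-prototype classifier below is invented purely to illustrate that mechanism; the feature values and prototypes are made up and bear no relation to any real moderation model.

```python
# Toy illustration only, not a real detector: a nearest-prototype classifier
# whose "weapon" class was trained on too few examples.
import math

# Assumed 2-D features (e.g. "elongated shape", "metallic surface").
# Instrument prototypes are averaged over many training examples; the weapon
# prototype comes from a single, unrepresentative training image.
prototypes = {
    "tuba":     [0.3, 0.9],
    "clarinet": [0.8, 0.4],
    "weapon":   [0.2, 0.2],  # poorly estimated: too little training data
}

def classify(features: list[float]) -> str:
    """Return the label whose prototype is closest to the input features."""
    return min(prototypes, key=lambda label: math.dist(features, prototypes[label]))

# A long, metallic object that is actually a rifle lands nearest "clarinet".
print(classify([0.9, 0.5]))  # -> "clarinet"
```

With enough representative weapon footage, the "weapon" prototype would move toward inputs like the one above and the misclassification would disappear, which is exactly Kofod-Petersen's point about needing sufficient film and pictures of weapons.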


