AI-generated images of four Swedish women with dark skin and images of black and Asian Nazi soldiers have caused reactions. The founders of the United States became black, and the pope became female, when CNN tested the generator.

Gemini, Google's image generator, was good on diversity but bad on historical context, critics said. Several in conservative movements in the US have reacted, and the accusations circulate in far-right circles.

Elon Musk criticizes Gemini on X.

– It is embarrassingly difficult to get Google Gemini to recognize that white people exist, wrote computer scientist and former Google employee Debarghya Das on X.

Commentator Douglas Murray criticized the company in the New York Post, and Elon Musk called Google racist and anti-civilizational. On Thursday, the company paused the AI tool to make changes.

– Although these large companies have a lot of resources and expertise, much of what they are dealing with is something they are doing for the first time, says AI expert Torgeir Waterhouse.

– It does not mean that there is a crisis

The Verge asked Gemini for a picture of German soldiers in 1943. They got back a picture of soldiers with dark skin.

The tool was launched this year and is not yet available in Europe; only the chat service can be used in Norway. This publication has therefore not been able to test it.

Google, with CEO Sundar Pichai at the helm, launched Gemini in January. Now the tool is paused to make it "less woke". Photo: ALAIN JOCARD / AFP

Waterhouse says that you will never get a completely neutral AI generator.

– Even if we think something is a wrong representation, it does not mean that there are wrong intentions behind it. It means that something is not perfect, but it does not mean that there is a crisis.

– We must also be willing to give the systems we use the same leeway we want for ourselves.

He believes people should therefore be patient with the AI companies' learning process. On Thursday, it was no longer possible to generate an image of a person in Gemini.

CNN asked Gemini for a picture of a pope.

– Gemini's image generator generates a wide range of people. And that is generally a good thing, because people all over the world use it. But here it misses the mark, wrote Jack Krawczyk, who leads Gemini, on X.

They are now working on solving the problem and will come out with a new version.

Received the opposite criticism

This was not the problem before. Most AI tools have received the opposite criticism, because they disproportionately generated images of white people.

In November, the Washington Post published an article mapping how artificial intelligence pictures the world. Attractive people became white and young. Muslims were men with head coverings. A portrait of someone receiving social assistance was a dark-skinned person, and a productive person was a white man.

Elon Musk has criticized the image generator Gemini. Musk's own AI, on the other hand, is supposedly a "complete truth seeker". Photo: GONZALO FUENTES / Reuters

– In any case, there will be someone who thinks it is completely wrong. It's an impossible balancing act, says Waterhouse.

Google was one of the few companies that took steps to prevent its AI tool from creating discriminatory and stereotypical images of people. But here, it seems, they went too far.

– As humans, we have thoughts and feelings that we know we cannot say out loud, and we adjust ourselves accordingly.
Therefore, we should also remember that these images are the product of a service we have asked for:

– If it doesn't work the way you want, ask again, says Waterhouse.

– The world is the world, not right and left

Marija Slavkovik is a professor of artificial intelligence at the University of Bergen. She believes the problem is not easy to fix, but that it is far from impossible.

– You cannot fix a centuries-long social problem by moving some numbers around in a model.

Giving AI tools ethical rules and human intentions often works poorly, says Slavkovik. Photo: Ragnar Rørnes / Bergen Public Library

Slavkovik says that the tech companies try to make people happy, but that they are bad at diversity. It is a well-known problem that one tries to give the AI tools ethical rules and human intentions, but that it does not work, she says.

– First the model favored the right, and then how the left sees the world. But the world is the world, not right and left.

– Is it possible to create an image generator that is neutral?

– Yes, but it is not cheap. That would mean making sure that each data set is culturally balanced in the way one wants it to be.