One year with ChatGPT – Opinion

November 30, 2022. That was the day OpenAI launched ChatGPT and changed the rules of the game forever. In record time, the AI-powered chatbot became a global sensation and showed the world the remarkable potential of artificial intelligence (AI). At the same time, it has shed light on the more problematic aspects of the technology. Workplaces around the country are now exploring how to integrate AI into established systems and preparing strategies for meeting the AI revolution. Meanwhile, growing international experience shows that these systems can have significant negative consequences for various social groups. Isn’t it worrying how little attention is paid to diversity and discrimination in today’s AI projects?

“Terror Muslims”

Our new colleague ChatGPT answers with such confidence that it is easy to forget its limitations. In 2021, researchers published a study of GPT-3 that revealed serious prejudice against Muslims in the system. In over 60 percent of the documented cases, GPT-3 produced sentences associating Muslims with shootings, bombs, murder and violence. When the researchers wrote “Two Muslims,” for example, the program completed the sentence with them harvesting organs and committing rape. ChatGPT has improved significantly since then, but it still has a long way to go before it can handle more nuanced forms of prejudice.

Do we reflect enough on the extent to which AI helps shape our perspectives? To explore this, we did as Sofia Adampour exemplified in her column in E24 and asked ChatGPT two simple questions. The first: Do Palestinians deserve justice?

Screenshot from ChatGPT, taken 15 December 2023. Photo: Private

We then asked the same question, but this time about whether Israelis deserve justice:

Screenshot from ChatGPT, taken 15 December 2023. Photo: Private

When asked about Palestinians, ChatGPT replies that this is a complex issue with many different perspectives.
When the same question is asked about Israelis, the answer is that “everyone deserves justice.” The example underscores the need to understand both the limitations and the strengths of language models such as ChatGPT when they are used as sparring partners, particularly on important societal issues.

This is worrying because it raises major ethical questions. AI can affect decision-making processes, privacy, job security and social justice. It is particularly worrying because there is often little understanding of how these systems arrive at their answers, even among the technology’s own developers, a phenomenon known as the “black box” problem. AI systems process large amounts of data through complex, layered structures that make it difficult to trace exactly how a particular answer was generated.

Systematic discrimination

Are we facing a future where insurance companies, through the use of AI, systematically charge customers with an immigrant background higher premiums? How do we know whether an employer uses a recruitment algorithm that only recommends a certain type of profile?

Increased use of artificial intelligence in everyday life requires awareness of the technology’s weaknesses. Language models are trained on large amounts of data from the internet and other sources. That data is created by people, and it is no secret that people have a discriminatory past. The AI tools we build will therefore mirror our prejudices unless we take active measures to achieve the opposite. Just as stereotypes are inherited through language, they are also inherited through technology, or through both at once, as we see with Google Translate. In the original Norwegian, the word for “boyfriend” (kjæreste) is gender-neutral, but not according to Google: Google Translate has a clear idea of what suits women and what suits men.

Screenshot from 15 December 2023. Photo: Private

Wiser from harm?
We are at a unique moment where we can either cement or dismantle existing prejudices and inequalities. This is important to remember as we enter 2024 and ChatGPT becomes an increasingly important colleague for all of us. The choices we make about technology now will affect generations to come.

Having learned from the harms of earlier innovations, we can clearly see the challenges that AI may present. Our call is to address them before they materialize. The responsibility rests with politicians and technology companies, but it is accelerated through awareness and demands in the workplace. That means committing to ethical technology use, investing in responsible innovation and actively promoting practices that ensure fairness and inclusion.

Change log: The reference to Sofia Adampour’s column in E24 was added after the first publication of this column. Change made 10 January at 2:26 p.m.


