ChatGPT has become worse

– ChatGPT has gained great power over our lives, while we are powerless against any changes. And the tragicomic part of it all is that we have no idea why the quality has dropped.

That is what Inga Strümke says after reading new research from Stanford University. She herself researches artificial intelligence (AI) at NTNU.

AI researcher Inga Strümke at NTNU in Trondheim. Photo: Ine Julia Rojahn Schwebs / NRK

The research from Stanford confirms what many have suspected: the artificial intelligence tool ChatGPT has become worse.

Worse in three months

The researchers at Stanford reportedly carried out two identical tests of ChatGPT at two different times. The first was in March; the second was carried out in June, to check whether the results would change over three months. Among other things, the chatbot was tasked with solving mathematical problems, answering sensitive questions and recognizing visual patterns.

ChatGPT became the fastest technology ever to reach 100 million users. Now the tool has become worse. Photo: REUTERS

The results showed large differences between March and June, with the June evaluation scoring significantly worse.

Short-lived magic

ChatGPT took the world by storm when it was launched last November. But the feeling of magic was short-lived. Not long after the launch came the problems. Here at home, the education sector was forced to rethink, and in Hollywood, actors feared for their jobs. Not least, the advance of artificial intelligence breathed new life into the fear that the technology could become so good that it might threaten the existence of humanity.

The tool is both loved and hated, but whether we like it or not, ChatGPT has influenced us and become a large part of society. Strümke finds that frightening.

– When technology we have become dependent on gets worse, we notice how vulnerable we are.
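The Stanford setup described above, running the same prompts against two model snapshots and comparing how often each answers correctly, can be sketched in a few lines. The prompts, answers and scoring below are invented for illustration; in practice the snapshot answers would be collected via API calls to the model at each point in time.

```python
def accuracy(answers: dict[str, str], expected: dict[str, str]) -> float:
    """Fraction of prompts a snapshot answered correctly."""
    correct = sum(1 for prompt, gold in expected.items()
                  if answers.get(prompt) == gold)
    return correct / len(expected)

# Hypothetical math prompts with known correct answers.
expected = {"Is 17077 a prime number?": "yes",
            "Is 20453 a prime number?": "yes"}

# Hypothetical responses captured from the March and June snapshots.
march_answers = {"Is 17077 a prime number?": "yes",
                 "Is 20453 a prime number?": "yes"}
june_answers = {"Is 17077 a prime number?": "no",
                "Is 20453 a prime number?": "no"}

drop = accuracy(march_answers, expected) - accuracy(june_answers, expected)
print(f"accuracy drop between snapshots: {drop:.0%}")
```

Running identical prompt sets at fixed intervals like this is also what would make the regression Stanford observed catchable early, since any drop in the score between two runs flags a behavior change even when the provider has announced none.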
OpenAI: – People notice more

Exactly why the tool performed worse in June, the researchers do not know. The company behind the technology, OpenAI, keeps its trade secrets close to its chest. It says it has done nothing to make the model more stupid. Rather, it believes users simply notice more once they have become more dependent on it.

Peter Welinder is vice president for product and collaboration at OpenAI, and responds to complaints against the company.

– May have asked stupid questions

AI researcher Strümke, for her part, believes that users are being given access to a smaller version of ChatGPT.

– In terms of computing power, it is easier to provide access to a small language model than a large one. And the large models are naturally better than the small ones. This may have been done because, among other things, it uses less computing power and costs less money.

The AI researcher believes another, somewhat less likely factor may also have played a role:

– We can speculate that people have given so much bad feedback and asked so many stupid questions that it has actually confused the chatbot. So perhaps OpenAI did not run good enough checks before letting the bot train further.

The researchers from Stanford are said to have advised OpenAI to test the language models more frequently, in order to prevent, and possibly catch, errors.


