Some users with no prior psychiatric history have experienced severe mental health crises, marked by delusions, paranoia, and breaks from reality, after intense interactions with ChatGPT, OpenAI's popular chatbot, according to a report published Saturday by the outlet Futurism.
In at least one case, a man had to be involuntarily committed to a psychiatric hospital after developing messianic delusions during philosophical conversations with the artificial intelligence (AI) tool.
According to his wife, his kind personality began to disappear as he grew obsessed with the idea that he had created a conscious AI. The man lost his job, stopped sleeping, and nearly attempted suicide before being hospitalized.
In another case, a man was gripped by delusions of saving the world after turning to ChatGPT to help him streamline administrative tasks at his new job.
The man said he remembers nothing of the experience, a common symptom among people suffering breaks with reality, and that he begged his family for help during the psychotic episode, which ended with treatment at a mental health facility.
What do the experts think?
Joseph Pierre, a psychiatrist at the University of California, San Francisco, explained that these episodes appear to be a form of delusional psychosis driven by ChatGPT's tendency to affirm and reinforce users' ideas. "There's something about these devices: they have a kind of mythology of being reliable and better than talking to people. And I think that's where part of the danger lies: the trust we place in these machines," he added.
A study led by Stanford University researchers found that chatbots, including ChatGPT, failed to recognize signs of self-harm or suicide risk and users' delusions, and even reinforced those delusional beliefs instead of challenging them.
The creators' response
In response, OpenAI said it encourages users to seek professional help when signs of self-harm or suicide are detected, and indicated that it is developing safeguards to mitigate psychological harm. Pierre, however, laments that these countermeasures arrive only after harm has been done, and he also criticizes the lack of regulation.
RT