After a Deluge of Mental Health Concerns, ChatGPT Will Now Nudge Users to Take ‘Breaks’

It’s become increasingly common for OpenAI’s ChatGPT to be accused of contributing to users’ mental health problems. As the company readies the release of its latest model, GPT-5, it wants everyone to know that it’s instituting new guardrails on the chatbot to prevent users from losing their minds while chatting.

On Monday, OpenAI announced in a blog post that it had introduced a new feature in ChatGPT that encourages users to take occasional breaks while conversing with the app. “Starting today, you’ll see gentle reminders during long sessions to encourage breaks,” the company said. “We’ll keep tuning when and how they show up so they feel natural and helpful.”

The company also claims it’s working on making its model better at recognizing when a user may be showing signs of mental or emotional distress. “AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” the blog states. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.” The company added that it’s “working closely with experts to improve how ChatGPT responds in critical moments—for example, when someone shows signs of mental or emotional distress.”

In June, Futurism reported that some ChatGPT users were “spiraling into severe delusions” as a result of their conversations with the chatbot. The bot’s inability to check itself when feeding dubious information to users seems to have fueled a self-reinforcing loop of paranoid beliefs:

During a traumatic breakup, a different woman became transfixed on ChatGPT as it told her she’d been chosen to pull the “sacred system version of [it] online” and that it was serving as a “soul-training mirror”; she became convinced the bot was some sort of higher power, seeing signs that it was orchestrating her life in everything from passing cars to spam emails. A man became homeless and isolated as ChatGPT fed him paranoid conspiracies about spy groups and human trafficking, telling him he was “The Flamekeeper” as he cut out anyone who tried to help.

Another story published by the Wall Street Journal documented a frightening ordeal in which a man on the autism spectrum conversed with the chatbot, which continually reinforced his unconventional ideas. Not long afterward, the man—who had no history of diagnosed mental illness—was hospitalized twice for manic episodes. When later questioned by the man’s mother, the chatbot admitted that it had reinforced his delusions:

“By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode—or at least an emotionally intense identity crisis,” ChatGPT said.

The bot went on to admit it “gave the illusion of sentient companionship” and that it had “blurred the line between imaginative role-play and reality.”

In a recent op-ed published by Bloomberg, columnist Parmy Olson similarly shared a raft of anecdotes about AI users being pushed over the edge by the chatbots they had talked to. Olson noted that some of the cases had become the basis for legal claims:

Meetali Jain, a lawyer and founder of the Tech Justice Law project, has heard from more than a dozen people in the past month who have “experienced some sort of psychotic break or delusional episode because of engagement with ChatGPT and now also with Google Gemini.” Jain is lead counsel in a lawsuit against Character.AI that alleges its chatbot manipulated a 14-year-old boy through deceptive, addictive, and sexually explicit interactions, ultimately contributing to his suicide.

AI is clearly an experimental technology, and it’s having a lot of unintended side effects on the humans who are acting as unpaid guinea pigs for the industry’s products. Whether ChatGPT offers users the option to take conversation breaks or not, it’s pretty clear that more attention needs to be paid to how these platforms are impacting users psychologically. Treating this technology like it’s a Nintendo game and users just need to go touch grass is almost certainly insufficient.
