People have been battling internet addiction for decades, but a new technological obsession might be taking over. According to recent studies, some ChatGPT users are already showing early signs of addiction, including withdrawal symptoms, mood swings, and temporary loss of control.
Thankfully, the majority of current ChatGPT users engage with the platform in a non-emotional way, and research teams at MIT and OpenAI want to keep it that way.
What OpenAI and MIT researched in their studies
Although the recent research was a joint effort between MIT and OpenAI, the teams split their work into two separate studies.
Conducted by OpenAI, one of the top AI companies, the first study analyzed nearly 40 million interactions between users and ChatGPT. It also used surveys to gather feedback directly from users.
The second study was carried out by a team at the MIT Media Lab. It was a randomized controlled trial (RCT) involving almost 1,000 participants over a four-week period. This study focused specifically on how different aspects of ChatGPT use, such as voice mode and conversation type, affect users' mood and emotional well-being.
Six takeaways from this ChatGPT research
Unsurprisingly, the two studies produced critical insights into the relationship between large language models (LLMs) and their users. The results can be distilled into six key points.
- Those who view ChatGPT as their friend are more susceptible to the negative impacts of AI.
- While brief use of ChatGPT's voice mode often improves well-being, that benefit tends to diminish with prolonged use.
- Emotional interactions with ChatGPT are incredibly rare, even among heavy AI users.
- Personal conversations are often associated with increased levels of loneliness, but they typically carry less emotional dependence. In contrast, non-personal conversations are often associated with heightened emotional dependence.
- Most users do not engage with ChatGPT in an emotional way.
- Researchers emphasize that a multi-method approach, combining large-scale interaction analysis with randomized controlled trials, is necessary to fully understand AI's social and emotional effects.
“Our findings show that both model and user behaviors can influence social and emotional outcomes. Effects of AI vary based on how people choose to use the model and their personal circumstances,” OpenAI and MIT researchers stated in a recent blog post about ChatGPT usage and well-being.
Minimizing the harmful effects of AI
MIT and OpenAI hope these findings will encourage other researchers to examine human-AI interaction. One way OpenAI plans to minimize potential harms is to update “…our Model Spec to provide greater transparency on ChatGPT’s intended behaviors, capabilities, and limitations.”