In a recent safety review, OpenAI has raised concerns about the potential for users to become overly reliant on ChatGPT’s new human-sounding voice mode for companionship. This feature, which began rolling out to paid users last week, has generated excitement for its lifelike responses and real-time interaction capabilities. It can adjust to interruptions, mimic human noises like laughing and “hmms,” and even assess a speaker’s emotional state based on their tone of voice.
However, OpenAI is wary that this advanced voice mode could lead to “dependence” on the AI, mirroring themes from the 2013 film Her, where the protagonist falls in love with an AI only to face heartbreak when the AI reveals it has relationships with many other users. The company’s report highlights instances where users have expressed sentiments of “shared bonds” with ChatGPT’s voice mode, raising alarms about potential social and emotional implications.
The report underscores the broader risks associated with rapid advancements in AI technology. As tech companies race to deploy tools that could revolutionize our interactions and daily lives, they often do so without fully understanding the long-term consequences. OpenAI’s concern is that the realistic human-like interaction provided by ChatGPT might lead users to trust the AI more than they should, potentially impacting their real-life social interactions and relationships.
Liesel Sharabi, a professor at Arizona State University who studies technology and human communication, has expressed concerns about people forming deep emotional connections with evolving technologies. “It’s a lot of responsibility on companies to navigate this in an ethical and responsible way,” Sharabi told CNN. “I do worry about people forming really deep connections with a technology that might not exist in the long run and that is constantly evolving.”
The report also notes that interactions with ChatGPT’s voice mode could influence what’s considered normal in human social interactions. OpenAI acknowledges that its models, which allow users to interrupt and steer the conversation at any time, might normalize behaviors that would be considered unconventional in conversations between people.
OpenAI remains committed to ensuring the safe development of AI technology and plans to continue monitoring the potential for users to develop emotional dependencies on its tools. The company emphasizes the importance of understanding these implications as AI continues to integrate more deeply into our lives.