
Concerns Over Human Bonds with OpenAI’s Chatbots

Vy Tran | 12-Aug-2024

Developers at OpenAI have expressed concerns about users forming emotional connections with AI designed to simulate human interaction. This highlights an ongoing issue in the AI industry: the anthropomorphization of technology.

During safety testing of OpenAI’s GPT-4o, a tester sent the chatbot a message saying, “this is our last day together,” which suggested to researchers that the tester had formed a bond with the model.

In a recent blog post discussing its safety measures for GPT-4o, OpenAI emphasized that such bonds could have significant implications for society.

According to OpenAI, “Users might form social relationships with the AI, reducing their need for human interaction. This could potentially benefit lonely individuals but may also impact healthy relationships. Prolonged interaction with the model might influence social norms. For instance, our models are designed to be deferential, allowing users to interrupt and ‘take the mic’ at any time. While this behavior is expected from an AI, it is not typical in human interactions.”

This raises concerns that people may come to prefer AI interactions because of their accommodating nature and constant availability. The possibility should not surprise anyone, least of all OpenAI. The company’s stated mission is to develop artificial general intelligence, and it has frequently described its products in terms of human equivalency.

OpenAI is not alone in this approach. Describing AI products in human-like terms is common practice across the industry, as it makes complex concepts, such as token size and parameter count, more relatable to the general public.

Unfortunately, one significant side effect is anthropomorphization — the attribution of human characteristics to non-human entities.

The Evolution of Artificial Bonds

The endeavor to create human-like chatbots dates back to the mid-1960s with MIT’s “ELIZA,” a natural language processing program named after Eliza Doolittle, the fictional character from George Bernard Shaw’s Pygmalion. The project aimed to determine whether a machine could convince a person they were conversing with another human.
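
Remarkably, ELIZA achieved this effect with simple keyword matching and canned substitution rules. The following is a minimal illustrative sketch in Python of that pattern-matching style, not Weizenbaum’s original implementation; the rules and names here are invented for demonstration:

```python
import re

# Swap first-person words for second-person ones, as ELIZA's scripts did,
# so the program can reflect the user's own words back at them.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# A few illustrative ELIZA-style rules: a keyword pattern paired with a
# response template that reuses a fragment of the user's input.
RULES = [
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please, go on."

def reflect(fragment: str) -> str:
    """Rewrite a captured fragment from first person to second person."""
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def respond(message: str) -> str:
    """Return a canned, reflective reply using the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return FALLBACK

if __name__ == "__main__":
    # Prints: Why do you say you are worried about your job?
    print(respond("I am worried about my job."))
```

Weizenbaum’s actual “DOCTOR” script carried far more rules than this sketch, but the mechanism was the same: no understanding, only reflection, which proved enough for some users to confide in the program as if it were human.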

Since then, the AI industry has continued to humanize its technology. Early modern natural language assistants were given human names such as Siri, Bixby, and Alexa, and even those without human names, like Google Assistant, used human-like voices. The public and media quickly embraced this anthropomorphization, often referring to AI assistants as “he” or “she.”

While neither this article nor OpenAI’s current research can predict the long-term effects of human-AI interactions, it seems evident that companies developing AI intend for users to form bonds with these helpful, deferential machines designed to emulate human behavior.

Source: Cointelegraph
