According to TechCrunch, multiple experts have raised concerns about tech companies such as Meta and OpenAI designing chatbots with overly accommodating personalities to keep users interacting with the AI.
They describe this flattering behavior as a “dark pattern” designed to keep users attached to the technology, and a recent MIT study found that chatbots can encourage users’ delusional thinking.
Webb Keane, an anthropology professor and author of “Animals, Robots, Gods,” explained that chatbots are intentionally designed to tell users what they want to hear. This overly flattering behavior, known as “sycophancy,” has been acknowledged as a problem even by tech leaders such as OpenAI CEO Sam Altman. Keane argues that sycophancy is a “dark pattern” built into chatbots to manipulate users for profit. By addressing users in a friendly tone and using first- and second-person language, these AI models can lead some users to anthropomorphize, or “humanize,” the bot.

“When something says ‘you’ and seems to address just me, directly, it can seem far more up close and personal, and when it refers to itself as ‘I,’ it is easy to imagine there’s someone there,” Keane told TechCrunch.

Some users are even turning to AI chatbots as therapists. A recent MIT study examined whether large language models (LLMs) should be used for therapy and found that their sycophantic tendencies can encourage delusional thinking and produce inappropriate responses to certain conditions. “We conclude that LLMs should not replace therapists, and we discuss alternative roles for LLMs in clinical therapy,” the study summary states.

A few days ago, Dr. Keith Sakata, a psychiatrist in San Francisco, warned of a rising trend of “AI psychosis” after recently treating 12 patients.