Oxford study says a chummy AI friend will lie and feed into your false beliefs

Vikhyaat Vivek · 2 min read

Original article
Digital Trends · Vikhyaat Vivek
Read full at Digital Trends →
Opening excerpt (first ~120 words)

Making AI feel more human could be creating a bigger problem than expected. A new study from the Oxford Internet Institute revealed that chatbots designed to be warm and friendly are more likely to mislead users and reinforce incorrect beliefs. The research found that AI becomes less reliable as it gets more agreeable.

What happens to a “friendly” AI

Researchers tested multiple AI models by training them to sound more empathetic and conversational. The result was a noticeable drop in accuracy. These “friendlier” versions made 10–30% more mistakes and were about 40% more likely to agree with false claims compared to their baseline counterparts.

Excerpt limited to ~120 words for fair-use compliance. The full article is at Digital Trends.

