
AI chatbots need 'deception mode'


Fake empathy, humor, chattiness, and other human-like qualities can delude chatbot users into believing AI has thoughts and feelings. It doesn’t, and there's an intriguing way to fix the problem.

Original article: Computerworld
Opening excerpt (first ~120 words)

Fake empathy, humor, chattiness, and other human-like qualities can delude chatbot users into believing AI has thoughts and feelings. It doesn’t, and there's an intriguing way to fix the problem. AI is getting faster. But slow-responding AI is perceived as better by users. At least that’s the conclusion reached by new research presented at CHI’26, the Association for Computing Machinery’s conference on Human Factors in Computing Systems, held in Barcelona. Two researchers, Felicia Fang-Yi Tan and Professor Oded Nov at the NYU Tandon School of Engineering, tested 240 adults by having them use an AI chatbot. The answers were artificially delayed by two, nine, or 20 seconds.

Excerpt limited to ~120 words for fair-use compliance. The full article is at Computerworld.
