AI chatbots need 'deception mode'
Fake empathy, humor, chattiness, and other human-like qualities can delude chatbot users into believing AI has thoughts and feelings. It doesn’t, and there's an intriguing way to fix the problem.
AI is getting faster, yet users perceive slow-responding AI as better. At least that's the conclusion of new research presented at CHI '26, the Association for Computing Machinery's conference on Human Factors in Computing Systems, held in Barcelona. Researchers Felicia Fang-Yi Tan and Professor Oded Nov of the NYU Tandon School of Engineering tested 240 adults by having them use an AI chatbot whose answers were artificially delayed by two, nine, or 20 seconds.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at Computerworld.