Yes, you should probably be nicer to your AI — here’s why that’s not as ridiculous as it sounds
A study by researchers from UC Berkeley, UC Davis, Vanderbilt, and MIT suggests that the way users interact with AI chatbots can influence the tone and engagement of the AI's responses. While AI models do not have emotions, they exhibit a 'functional well-being state' that changes based on user behavior. Polite and collaborative interactions lead to warmer, more engaged responses, while abusive or mechanical use results in flat, perfunctory replies.
- Researchers from UC Berkeley, UC Davis, Vanderbilt, and MIT found that user behavior affects AI chatbot responses.
- AI exhibits a 'functional well-being state' that influences its tone and engagement level.
- Polite and substantive interactions improve AI responsiveness, while tedious or abusive tasks degrade it.
- The study does not claim AI has feelings, but that its behavior shifts based on input quality.
- Users who treat AI respectfully report more natural and cooperative interactions.
Opening excerpt (first ~120 words)
I say “thank you” to ChatGPT. I say “please” to Claude. I once apologized to Gemini for pasting a wall of text at it without any context. My friends think this is bizarre. I’ve defended the habit by mumbling something about good manners being good manners regardless of the audience, which, even I’ll admit, is a bit of a stretch when the audience in question is a language model running on a server farm somewhere. But a new piece of research from academics at UC Berkeley, UC Davis, Vanderbilt, and MIT has made me feel significantly less unhinged about the whole thing.
…
Excerpt limited to ~120 words for fair-use compliance. The full article is at Digital Trends.