The Imbalanced User-AI Relationships as an Ethical Failure of Front-End Design in Healthcare AI
Ethical discourse on AI in healthcare has focused predominantly on back-end concerns such as bias, fairness and explainability, while the front-end interface, where patients and clinicians actually encounter AI outputs, remains underexplored. This paper identifies imbalanced user-AI relationships as a distinct class of front-end ethical failure: patients are rendered highly visible to AI systems through data inference, yet cannot understand, question or influence how they are represented. Through the concept of asymmetric legibility and a chat-based telemedicine case, we show how design choices such as default recommendations, restricted inputs and suppressed uncertainty undermine agency, clinician judgment and human oversight even where systems are technically accurate. We propose reciprocity as a design orientation and offer interventions for more balanced, participatory user-AI relationships in healthcare.
Computer Science > Human-Computer Interaction
arXiv:2604.22767 (cs) [Submitted on 24 Mar 2026]
Authors: Maureen Mghambi Mwadime