Publisher: arXiv
Empathy is central to human connection, yet people often struggle to express
it effectively. In blinded evaluations, large language models (LLMs) generate
responses that are often judged more empathic than human-written ones. Yet
when a response is attributed to AI, recipients feel less heard and validated than
when comparable responses are attributed to a human. To probe and address
this gap in empathic communication skill, we built Lend an Ear, an experimental
conversation platform in which participants are asked to offer empathic support
to an LLM role-playing personal and workplace troubles. From 33,938 messages spanning 2,904 text-based conversations between 968 participants and their
LLM conversational partners, we derive a data-driven taxonomy of idiomatic
empathic expressions in naturalistic dialogue. Based on a pre-registered randomized experiment, we present evidence that a brief LLM coaching intervention offering personalized feedback on how to communicate empathy effectively significantly increases the alignment of participants' communication with normative empathic communication patterns relative to both a control group and a group
that received video-based but non-personalized feedback. Moreover, we find evidence for a silent empathy effect: people feel empathy but systematically fail to express it. Nonetheless, participants reliably identify responses aligned with normative empathic communication criteria as more expressive of empathy.
thy. Together, these results advance the scientific understanding of how empathy
is expressed and valued and demonstrate a scalable, AI-based intervention for
scaffolding and cultivating it.

