Publisher: arXiv
Lived experiences fundamentally shape how individuals interact with AI
systems, influencing perceptions of safety, trust, and usability. While prior
research has focused on developing techniques to emulate human preferences and
has proposed taxonomies to categorize risks (such as psychological harms and
algorithmic biases), these efforts offer limited systematic understanding of
lived human experiences and few actionable strategies for embedding them
meaningfully into the AI development lifecycle. This work proposes a
framework for meaningfully integrating lived experience into the design and
evaluation of AI systems. We synthesize interdisciplinary literature across
lived experience philosophy, human-centered design, and human-AI interaction,
arguing that centering lived experience can lead to models that more accurately
reflect the retrospective, emotional, and contextual dimensions of human
cognition. Drawing from a wide body of work across psychology, education,
healthcare, and social policy, we present a targeted taxonomy of lived
experiences with specific applicability to AI systems. To ground our framework,
we examine three application domains: (i) education, (ii) healthcare, and (iii)
cultural alignment, illustrating how lived experience informs user goals,
system expectations, and ethical considerations in each context. We further
incorporate insights from AI system operators and human-AI partnerships to
highlight challenges in responsibility allocation, mental model calibration,
and long-term system adaptation. We conclude with actionable recommendations
for developing experience-centered AI systems that are not only technically
robust but also empathetic, context-aware, and aligned with human realities.
This work offers a foundation for future research that bridges technical
development with the lived experiences of those impacted by AI systems.
