Publisher: arXiv
Selecting a college major is a difficult decision for many incoming freshmen.
Traditional academic advising is often hindered by long wait times,
intimidating environments, and limited personalization. AI chatbots present an
opportunity to address these challenges. However, AI systems also have the
potential to generate biased responses that reflect prejudices related to race,
gender, socioeconomic status, and disability. These biases risk turning away
prospective students and undermining the reliability of AI systems. This study
aims to develop a program-specific AI chatbot for the University of Maryland
(UMD) A. James Clark School of Engineering. Our research team analyzed and
mitigated potential biases in the chatbot's responses. We tested the chatbot on
diverse student queries and scored its responses on metrics of accuracy,
relevance, personalization, and presence of bias. The results demonstrate that
with careful
prompt engineering and bias mitigation strategies, AI chatbots can provide
high-quality, unbiased academic advising support, achieving mean scores of 9.76
for accuracy, 9.56 for relevance, and 9.60 for personalization, with no
stereotypical biases found in the sample data. However, due to the small sample
size and limited timeframe, our AI model may not fully reflect the nuances of
student queries in engineering academic advising. Regardless, these findings
will inform best practices for building ethical AI systems in higher education,
offering tools to complement traditional advising and address the inequities
faced by many underrepresented and first-generation college students.
What is the application?
Who is the user?
What age?
Why use AI?
Study design
