Publisher: arXiv
In recent years, natural language processing (NLP) has become integral to
educational data mining, particularly in the analysis of student-generated
language products. For research and assessment purposes, so-called embedding
models are typically employed to generate numeric representations of text that
capture its semantic content for use in subsequent quantitative analyses. Yet
when it comes to science-related language, symbolic expressions such as
equations and formulas introduce challenges that current embedding models
struggle to address. Existing studies and practical applications often either
overlook these challenges or remove symbolic expressions altogether,
potentially biasing research findings and degrading the performance of
downstream applications. This study therefore explores how contemporary
embedding models differ in their capability to process and interpret
science-related symbolic expressions. To this end, various embedding models are
evaluated using physics-specific symbolic expressions drawn from authentic
student responses, with performance assessed via two approaches: 1)
similarity-based analyses and 2) integration into a machine learning pipeline.
Our findings reveal significant differences in model performance, with OpenAI's
text-embedding-3-large outperforming all other examined models, though its
advantage was moderate rather than decisive. Overall, this study underscores
how important it is for educational data mining researchers and practitioners
to select NLP embedding models carefully when working with science-related
language products that include symbolic expressions. The code
and (partial) data are available at https://doi.org/10.17605/OSF.IO/6XQVG.
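
For readers who want a concrete picture of the two evaluation approaches, the
following Python sketch embeds a few expressions with OpenAI's
text-embedding-3-large (the model the abstract names) and then runs (1) a
similarity-based comparison and (2) a minimal machine learning pipeline. The
physics expressions, labels, and classifier are hypothetical placeholders, not
the study's data or exact setup; those are available via the OSF link above.

```python
import numpy as np
from openai import OpenAI                      # pip install openai; needs OPENAI_API_KEY
from sklearn.linear_model import LogisticRegression

client = OpenAI()

def embed(texts, model="text-embedding-3-large"):
    """Embed a list of strings and return an (n_texts, dim) array."""
    response = client.embeddings.create(model=model, input=texts)
    return np.array([item.embedding for item in response.data])

# Hypothetical physics expressions (placeholders for authentic student responses).
expressions = ["F = m * a", "W = F * s", "E = m * c^2", "U = R * I"]
vectors = embed(expressions)

# 1) Similarity-based analysis: cosine similarity between expression pairs.
def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors[0], vectors[1]))          # e.g., F = m*a vs. W = F*s

# 2) Machine learning pipeline: embeddings as features for a downstream
#    classifier (dummy labels standing in for, e.g., correctness codes).
labels = [0, 0, 1, 1]
clf = LogisticRegression(max_iter=1000).fit(vectors, labels)
print(clf.predict(vectors))
```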
