Publisher: arXiv
We present a study that translates the Force Concept Inventory (FCI) using
OpenAI GPT-4o and assesses the specific difficulties of translating a
science-focused text with Large Language Models (LLMs). The FCI is a
physics exam meant to evaluate outcomes of a student cohort before and after
instruction in Newtonian physics. We examine the problem-solving ability of the
LLM in both the translated document and the translation back into English,
detailing the language-dependent issues that complicate the translation. While
ChatGPT performs remarkably well at answering the questions in both the
translated language and the back-translation into English, problems
arise with language-specific nuances and formatting. Pitfalls include words or
phrases that lack one-to-one matching terms in another language, especially
discipline-specific scientific terms, or outright mistranslations. Depending on
the context, these translations can result in a critical change in the physical
meaning of the problem. Additionally, issues with question numbering and
lettering are found in some languages. These numbering and lettering issues
provide insight into the abilities of the LLM and suggest that it is not
simply relying on FCI questions that may have been part of its training data
to provide answers. These findings underscore
that while LLMs can accelerate multilingual access to educational tools,
careful review is still needed to ensure fidelity and clarity in translated
assessments. LLMs provide a new opportunity to expand educational tools and
assessments. At the same time, using LLMs to facilitate translations poses
unique challenges, which this case study examines in detail.
What is the application?
Who is the user?
What age?
Why use AI?
Study design
