Teaching – Assessment and Feedback

Takeaways

  • AI assessment tools still face challenges with consistency and fairness, particularly for non-English speakers and multilingual students: research shows biased grading of bilingual writing compared to English-only writing, and fine-tuning with mixed-language data is needed to improve equity (Syamkumar et al. (2024), Liang et al. (2023)).
  • Automated Writing Evaluation (AWE) tools can significantly improve student writing quality, with effect sizes ranging from 0.38 to 0.98, while also reducing teachers' workload; both enhanced and pure AWE systems demonstrate similarly positive impacts on student performance (Bulut et al. (2024), Ferman et al. (2020)).
  • Personalized feedback from AI systems improves student engagement, especially when emotional tone is tailored to the task type; studies show students perceive AI-generated feedback as more useful than human feedback, and that it requires minimal modification by instructors (Alsaiari et al. (2024), Wan & Chen (2024)).
  • Many students perform worse on exams when using Generative AI tools, with one study showing an average decrease of 6.71 points (out of 100), suggesting that educators need to provide explicit guidance on effective AI tool use rather than simply allowing unrestricted access (Wecks et al. (2024), Ahadian et al. (2024)).
  • Successful AI assessment implementation requires educators to redesign assessment approaches to focus on higher-order thinking skills, create authentic assessments requiring personal reflection, and develop clear institutional policies on acceptable AI use to maintain academic integrity (Toledo Tan & Amor Tan (2024), Dotan et al. (2024), Tu et al. (2023)).

Research synthesis is AI-generated, human reviewed. Updated 03/2025.
