Publisher: arXiv
This paper presents the concept of AI-supported Mini-Labs, combining
smartphone-based experiments with multimodal large language models (MLLMs).
Smartphones, with their integrated sensors and computational power, function as
versatile mobile laboratories for physics education. While they enable the
collection of rich experimental data, analyzing the complex everyday
phenomena they capture has often been out of reach in the classroom. Advances in MLLMs now allow
learners to process multimodal data (text, images, audio, and video) and
receive support in experiment design, data analysis, and scientific
interpretation. Three case studies highlight the approach: determining a
vehicle's drag coefficient from accelerometer data, measuring the ionospheric
reflection height from lightning-generated signals analyzed as audio
spectrograms, and real-time spectroscopy of blood volume dynamics using
smartphone video. The results show clear advantages over conventional methods,
including time savings, high-quality visualizations, and individualized
guidance. Beyond simplifying data analysis, AI-augmented pocket labs foster
representational competence, critical thinking, and 21st-century skills. This
hybrid approach offers a promising pathway for individualized and inquiry-based
science education, though further studies are needed to assess long-term
learning effects and potential risks.
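The first case study, extracting a drag coefficient from accelerometer data, can be sketched as a coast-down fit. This is a minimal illustration, not the authors' actual pipeline: all vehicle parameters and the synthetic data below are assumptions, and the model fits the deceleration a(v) = -(g·C_r + ρ·C_d·A·v²/2m) by linear least squares.

```python
import numpy as np

# Hypothetical coast-down analysis: fit deceleration a(v) = -(b0 + b2*v^2)
# to smartphone speed/acceleration data, then recover the drag coefficient
# via C_d = 2*m*b2 / (rho * A). All numbers are illustrative assumptions.

m, rho, A = 1400.0, 1.225, 2.2      # vehicle mass (kg), air density, frontal area (m^2)
Cd_true, Cr = 0.30, 0.012           # "true" values used only to synthesize data

v = np.linspace(30.0, 5.0, 120)     # coasting speeds (m/s)
a = -(9.81 * Cr + 0.5 * rho * Cd_true * A * v**2 / m)
a += np.random.default_rng(0).normal(0.0, 0.02, v.size)  # sensor noise

# Linear least squares on -a = b0 + b2 * v^2
X = np.column_stack([np.ones_like(v), v**2])
b0, b2 = np.linalg.lstsq(X, -a, rcond=None)[0]

Cd_est = 2.0 * m * b2 / (rho * A)
print(f"estimated C_d = {Cd_est:.3f}")
```

In a classroom setting, the synthetic speed trace would be replaced by the values logged by the smartphone's sensors during an actual coast-down run.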
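The ionospheric case study rests on a simple waveguide relation that a one-line computation makes concrete. In the standard "tweek" analysis of lightning sferics, the first-mode cutoff frequency f_c read off an audio spectrogram relates to the reflection height h by f_c = c/(2h), so h = c/(2·f_c). The cutoff value below is an assumed example reading, not a result from the paper.

```python
# Earth-ionosphere waveguide: first-mode cutoff f_c = c / (2h)  =>  h = c / (2 f_c)
c = 2.998e8   # speed of light (m/s)
f_c = 1.7e3   # example cutoff frequency read off a spectrogram (Hz), assumed

h = c / (2.0 * f_c)
print(f"reflection height = {h / 1e3:.0f} km")
```

A cutoff near 1.7 kHz gives a height of roughly 88 km, consistent with the nighttime lower ionosphere.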
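The third case study, blood volume dynamics from smartphone video, is commonly realized as camera photoplethysmography: a fingertip on the lens modulates the mean green-channel brightness with each heartbeat, and a Fourier peak gives the pulse rate. The sketch below uses a synthetic 72 bpm signal and an assumed 30 fps frame rate; it illustrates the technique, not the paper's implementation.

```python
import numpy as np

# Hypothetical smartphone PPG sketch: mean green-channel brightness per
# frame -> FFT -> peak frequency in the heart-rate band -> pulse in bpm.
fps = 30.0                                   # assumed camera frame rate (Hz)
t = np.arange(0, 20, 1.0 / fps)              # 20 s of frames
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t)  # synthetic 1.2 Hz pulse (72 bpm)
signal += np.random.default_rng(1).normal(0.0, 0.005, t.size)  # noise

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1.0 / fps)

# Restrict to a plausible heart-rate band (0.7-3.0 Hz, i.e. 42-180 bpm)
band = (freqs > 0.7) & (freqs < 3.0)
bpm = 60.0 * freqs[band][np.argmax(spectrum[band])]
print(f"estimated pulse = {bpm:.0f} bpm")
```

With real video, the synthetic trace would be the per-frame mean of the green channel extracted from the camera stream.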
What is the application?
What age group is targeted?
Why use AI?
What is the study design?
