In Press
Sao Pedro, M., Baker, R., Gobert, J., Montalvo, O., & Nakama, A. (in press). Using Machine-Learned Detectors of Systematic Inquiry Behavior to Predict Gains in Inquiry Skills. User Modeling and User-Adapted Interaction.
Abstract. We present work toward automatically assessing and estimating science inquiry skills as middle school students engage in inquiry within a physical science microworld. To do so, we generated machine-learned models that can detect when students test their articulated hypotheses, design controlled experiments, and engage in planning behaviors using two inquiry support tools. Models were trained using labels generated through a new method of hand-coding log files, “text replay tagging”. This approach led to detectors that can automatically and accurately identify these inquiry skills under student-level cross-validation. The resulting detectors can be applied at run-time to drive scaffolding intervention. They can also be leveraged to automatically score all practice attempts, rather than requiring hand-classification, and to build models of latent skill proficiency. As part of this work, we also compared two approaches for modeling latent proficiency: Bayesian Knowledge-Tracing and an averaging approach that assumes a static inquiry skill level. These approaches were compared on their efficacy at predicting skill before a student engages in an inquiry activity, predicting performance on a paper-style multiple-choice test of inquiry, and predicting performance on a transfer task requiring data collection skills. Overall, we found that both approaches were effective at estimating student skills within the environment. Additionally, the models’ skill estimates were significant predictors of performance on the two types of inquiry transfer tests.
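Note: The following is a minimal illustrative sketch, not the paper's implementation, contrasting the two skill-estimation approaches the abstract names: a standard Bayesian Knowledge-Tracing update and a static averaging baseline applied to detector-labeled practice attempts. All parameter values and names (guess, slip, learn, the prior) are assumptions for illustration, not fitted values from the study.

# Sketch: BKT update vs. averaging over detector-labeled attempts (assumed parameters).

def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One BKT step: condition on the observed attempt, then apply learning."""
    if correct:
        numer = p_know * (1 - slip)
        denom = numer + (1 - p_know) * guess
    else:
        numer = p_know * slip
        denom = numer + (1 - p_know) * (1 - guess)
    p_given_obs = numer / denom
    # Transition: the student may acquire the skill after this opportunity.
    return p_given_obs + (1 - p_given_obs) * learn

def average_estimate(observations):
    """Static-skill baseline: proportion of attempts judged correct so far."""
    return sum(observations) / len(observations) if observations else 0.0

if __name__ == "__main__":
    # Detector-labeled practice attempts (1 = skill demonstrated, 0 = not); illustrative data.
    attempts = [0, 1, 1, 0, 1, 1, 1]
    p_know = 0.3  # assumed prior probability of knowing the skill
    for obs in attempts:
        p_know = bkt_update(p_know, bool(obs))
    print(f"BKT estimate after {len(attempts)} attempts: {p_know:.3f}")
    print(f"Averaging estimate: {average_estimate(attempts):.3f}")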
Keywords: scientific inquiry, exploratory learning environment assessment, skill prediction, machine-learned models, microworlds, behavior detection, designing and conducting experiments, Bayesian Knowledge-Tracing