Every month our team of Instructional Designers meets for “Talking Teaching” – sessions where we share and discuss interesting articles, methods, and pedagogies. This month, we discussed how question order and algorithms affect students’ ability to recall information.
In this blog, I’d like to discuss how machine-guided learning can help students learn by improving their concept retention. The article we discussed is published in npj Science of Learning, a fully open-access journal in Nature’s portfolio dedicated to all facets of learning, from neuroscience to pedagogy.
The article we’d like to share with you this month is:
- Large-scale randomized experiments reveal that machine learning-based instruction helps people memorize more effectively. The authors had over 50,000 learners use an app to prepare for the written portion of the German driving exam. These students were put into one of three groups to test whether an algorithm could help students memorize and retain information. The algorithm appeared to be successful: students in the algorithm group forgot information at a much lower rate than those in the other two groups.
What is machine learning-based instruction?
Machine learning-based instruction uses an algorithm to present information and/or questions to students. The algorithm adjusts the frequency and category of presented information based on previous interactions with the user. If a student is answering questions related to a particular concept incorrectly, the algorithm might increase the frequency of questions on that concept, giving the student more practice. An advanced student may encounter more difficult questions sooner than their peers, since the algorithm detects that they are ready to move past the basics. This type of instruction can improve engagement and information retention, because it is tailored to the individual. It’s likely you will have seen this technique in commonly used learning apps, like Duolingo.
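As a loose sketch of this idea (this is not the authors’ published algorithm; the question data, concept names, and weighting scheme are invented for illustration), adaptive selection can be as simple as weighted sampling, where concepts a learner keeps missing become more likely to reappear:

```python
import random

def pick_question(questions, error_counts, rng):
    # Weight each question by 1 + past errors on its concept, so
    # frequently missed concepts are practiced more often.
    weights = [1 + error_counts.get(q["concept"], 0) for q in questions]
    return rng.choices(questions, weights=weights, k=1)[0]

questions = [
    {"id": 1, "concept": "right-of-way"},
    {"id": 2, "concept": "speed-limits"},
    {"id": 3, "concept": "parking"},
]
errors = {"right-of-way": 4}  # this learner keeps missing right-of-way

rng = random.Random(0)
counts = {1: 0, 2: 0, 3: 0}
for _ in range(1000):
    counts[pick_question(questions, errors, rng)["id"]] += 1
# Question 1 (weight 5) is drawn far more often than questions 2 and 3.
```

A real system would update the weights after every answer and fold in spacing effects; this sketch only shows the core idea of adjusting question frequency to the individual.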
How did the authors use machine learning?
Upadhyay et al. developed an app designed to prepare students for the written portion of the German driver’s test. Students were sorted into one of three categories: select, difficulty, or random.
Students in the “select” group had their questions selected by an algorithm developed by the authors, which was aimed at increasing their information retention. Students in the “difficulty” group had their questions sorted by difficulty; they were presented with easier questions first, then harder questions.
Finally, students in the “random” group were presented with questions at random (with replacement). This study appears to be novel in that it used a randomized controlled trial to verify that the algorithm actually worked.
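To make the two comparison conditions concrete, here is a hedged sketch (the question dictionaries and difficulty values are invented; the paper describes these conditions only in prose): the “difficulty” policy sorts questions easy-to-hard, while the “random” policy samples uniformly with replacement, so the same question can repeat:

```python
import random

def difficulty_policy(questions):
    # "difficulty" group: present easier questions first.
    return sorted(questions, key=lambda q: q["difficulty"])

def random_policy(questions, n, rng):
    # "random" group: uniform sampling with replacement, so the
    # same question may appear more than once in a session.
    return [rng.choice(questions) for _ in range(n)]

questions = [
    {"id": 1, "difficulty": 3},
    {"id": 2, "difficulty": 1},
    {"id": 3, "difficulty": 2},
]

ordered = difficulty_policy(questions)
# easy-to-hard order of ids: [2, 3, 1]
session = random_policy(questions, 5, random.Random(0))
```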
Did the automated questions help students retain information?
Yes. The authors analyzed the “forgetting rate” (the rate at which students forgot information) against the time between review sessions. Somewhat counterintuitively, a lower score is better: a lower forgetting rate means a student forgot less, and therefore remembered more.
A multiple regression analysis showed that the normalized forgetting rate of the select group was much lower than those of the difficulty and random groups, even when a week passed between review sessions. In addition, learners in the select group were 50.6% more likely to open the app within 4–7 days than those in the random group, which implies the algorithm not only helped students retain more information but also increased their engagement with the app (and therefore the material).
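Why a lower forgetting rate means better retention can be seen with the classic exponential forgetting curve (a standard textbook model, not necessarily the paper’s exact formulation; the rate values below are hypothetical):

```python
import math

def recall_probability(days_elapsed, forgetting_rate):
    # Exponential forgetting curve: recall decays over time, and a
    # lower forgetting rate means more information is retained.
    return math.exp(-forgetting_rate * days_elapsed)

# Hypothetical rates: the "select" group forgets more slowly.
select_rate, random_rate = 0.05, 0.15
after_week_select = recall_probability(7, select_rate)   # ~0.70
after_week_random = recall_probability(7, random_rate)   # ~0.35
```

Under this model, halving the forgetting rate compounds over time: the gap between groups grows the longer the interval between reviews, which matches the finding that the select group’s advantage held even with a week between sessions.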
While the algorithm helped students retain information, machine learning is not an all-purpose solution. It’s a specific tool that is best used in conjunction with other methods to provide a great student experience. It’s important to consider that this app only changed the frequency and order that students saw questions. There are a few things that the algorithm did not do that are worth noting:
- It did not assess comprehension – in other words, did the student understand the concept, or did they simply learn to pair the correct answer with the question?
- Higher-level questions that require applying concepts (in particular, any answers that cannot be marked automatically) are difficult to incorporate into machine learning and were not present in this algorithm.
- The app’s design was not the sole source of student motivation. In this study, students were preparing for their driving exam, so they were all motivated to study and pass. Students in a university environment may not be as motivated by exam results.
What does this mean for me?
The code and data are available on GitHub if you would like to access them.
If you’re not up for coding your own software, then hopefully this paper has sparked an interest in machine learning-based software. This method has potential: engagement is increasingly important as blended and remote teaching become normalized. Machine learning-based methods may be a good way for students to interact with learning material while you focus on keeping your course running smoothly.
Join the discussion
Have you tried to incorporate machine-enhanced learning in your teaching? Do you have qualms about equity when it comes to algorithms? Join us on our community pages and let us and other educators know your thoughts!
Our online community is a place to engage in discussions on important topics like these and many others related to life-science teaching practices and pedagogy. The community connects and reflects a passionate group of life-science educators from all over the world. Our hope is that the community provides a way for educators to work together, get feedback from each other, and build stronger, more meaningful relationships in real time, whether these are across campus or across the globe.