dc.contributor.author  Pankaj Chejara
dc.contributor.author  Luis P. Prieto
dc.contributor.author  Adolfo Ruiz-Calleja
dc.contributor.author  María Jesús Rodríguez-Triana
dc.contributor.author  Shashi Kant Shankar
dc.contributor.author  Reet Kasepalu
dc.contributor.other  School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
dc.contributor.other  School of Educational Sciences, Tallinn University, 10120 Tallinn, Estonia
dc.contributor.other  GSIC-EMIC Group, University of Valladolid, 47011 Valladolid, Spain
dc.contributor.other  School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
dc.contributor.other  School of Digital Technologies, Tallinn University, 10120 Tallinn, Estonia
dc.contributor.other  School of Educational Sciences, Tallinn University, 10120 Tallinn, Estonia
dc.date.accessioned  2025-10-09T05:21:33Z
dc.date.available  2025-10-09T05:21:33Z
dc.date.issued  2021-04-01
dc.identifier.uri  https://www.mdpi.com/1424-8220/21/8/2863
dc.identifier.uri  http://digilib.fisipol.ugm.ac.id/repo/handle/15717717/40977
dc.description.abstract  Multimodal Learning Analytics (MMLA) researchers are increasingly employing machine learning (ML) techniques to develop predictive models to improve learning and teaching practices. These predictive models are often evaluated for their generalizability using methods from the ML domain, which do not take into account MMLA’s educational nature. Furthermore, model evaluation in MMLA lacks systematization, which is also reflected in the heterogeneous reporting of evaluation results. To overcome these issues, this paper proposes an evaluation framework to assess and report the generalizability of ML models in MMLA (EFAR-MMLA). To illustrate the usefulness of EFAR-MMLA, we present a case study with two datasets, each with audio and log data collected from a classroom during a collaborative learning session. In this case study, regression models are developed for collaboration quality and its sub-dimensions, and their generalizability is evaluated and reported. The framework helped us to systematically detect and report that the models performed well when evaluated using hold-out or cross-validation but degraded quickly when evaluated across different student groups and learning contexts. The framework helps to open up a “wicked problem” in MMLA research that remains fuzzy (i.e., the generalizability of ML models), which is critical both to accumulating knowledge in the research community and to demonstrating the practical relevance of these techniques.
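
The degradation pattern the abstract describes, strong hold-out/cross-validation scores that do not survive evaluation on unseen student groups, can be illustrated with scikit-learn's KFold versus GroupKFold splitters. The sketch below is illustrative only: the synthetic data, the random-forest regressor, and all variable names are assumptions, not the features, models, or datasets of the paper's case study.

    # Minimal sketch (not the paper's pipeline) contrasting two of the
    # evaluation levels EFAR-MMLA distinguishes: standard cross-validation,
    # which mixes samples from the same student group across folds, versus
    # across-group validation, which holds out whole groups. Data is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GroupKFold, KFold, cross_val_score

    rng = np.random.default_rng(42)
    n_groups, per_group = 10, 20
    n = n_groups * per_group
    groups = np.repeat(np.arange(n_groups), per_group)  # group id per sample

    # Each group gets a random offset on the target and an (independent)
    # group-specific feature value. That feature lets a model "memorize"
    # groups it has already seen but carries no transferable signal.
    group_effect = rng.normal(0.0, 2.0, n_groups)
    group_marker = rng.normal(0.0, 1.0, n_groups)

    X_signal = rng.normal(size=(n, 4))                  # genuinely predictive features
    marker = group_marker[groups][:, None] + rng.normal(0.0, 0.1, (n, 1))
    X = np.hstack([X_signal, marker])
    y = X_signal.sum(axis=1) + group_effect[groups] + rng.normal(0.0, 0.5, n)

    model = RandomForestRegressor(n_estimators=200, random_state=0)

    # Within-group evaluation: the same groups appear in train and test folds.
    within = cross_val_score(model, X, y,
                             cv=KFold(n_splits=5, shuffle=True, random_state=0))

    # Across-group evaluation: each test fold contains only unseen groups.
    across = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=5))

    print(f"within-group CV R^2: {within.mean():.2f}")
    print(f"across-group CV R^2: {across.mean():.2f}")

In this synthetic setup the group-marker feature inflates the within-group score, because the model memorizes group-specific offsets; under GroupKFold those offsets are unpredictable for held-out groups, so the R² drops, mirroring the across-group degradation the framework is designed to surface and report.
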
dc.language.iso  en
dc.publisher  MDPI AG
dc.subject.lcc  Chemical technology
dc.title  EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA
dc.type  Article
dc.description.keywords  Multimodal Learning Analytics
dc.description.keywords  MMLA
dc.description.keywords  face-to-face collaboration
dc.description.keywords  machine learning
dc.description.keywords  generalizability
dc.description.keywords  evaluation framework
dc.description.doi  10.3390/s21082863
dc.title.journal  Sensors
dc.identifier.e-issn  1424-8220
dc.identifier.oai  oai:doaj.org/journal:a3bd79c39fa44a42b6691e9eb94023e1
dc.journal.info  Volume 21, Issue 8

