What is the purpose of this dashboard?
The Dutch Learning Analytics Scan 2025 was initiated by the Npuls Best and Worst Practices Learning Analytics team. Its purpose is to illuminate Learning Analytics activities within Dutch tertiary education.
The team based the Dutch scan on three pillars:
The survey cut-off point was November 2025; the library search was conducted in October 2025. The Npuls team was decommissioned at the end of 2025.
This dashboard summarises the scan.
Why Learning Analytics counts
The application of Learning Analytics (LA) is on the rise. Its techniques provide tangible, evidence-based, positive results, for example in adapting learning strategies. LA is often combined with AI interventions to support students during their learning, for example by creating adaptive learning paths or analysing the output of a conversation with ChatGPT. LA can also provide a safety net for AI use, supplying data support for interventions and clear explanations to teachers and students.
LA deployments at scale face barriers similar to those of AI deployments, owing to contextual factors including interactions with students and teachers. These barriers include proving an intervention’s effectiveness, ethics, bias and fairness, data availability, infrastructure, culture, data literacy, governance, complexity, legal guidelines and, at times, isolation from the LA communities that could help.
The team’s publications include:
If you are interested in generative AI content-generation workshops, feel free to contact:
Active members
I would like to acknowledge the feedback from the following experts, whose comments have been invaluable in improving this dashboard:
Alumni
I am happy to discuss potential collaborations or answer any questions you may have. For further details or discussion specifically about this dashboard, please contact:
Cut-off date: December 2025
The organizational level was predominantly universities. No projects involving Generative AI were mentioned. Many types of barriers were encountered, and useful advice was provided.
The organizations involved in Learning Analytics are:
The three types of data used were:
Advice
Suggested reading from the survey
There are numerous challenges to deploying Learning Analytics:
| Fact | Value |
|---|---|
| Number of Papers | 402 |
| Current Year | 2025 |
| First Year | 2011 |
| Years covered | 15 |
| Authors | 919 |
| Dutch Authors | 412 |
| Non-Dutch Authors | 532 |
| Conferences | 134 |
| Sponsors | 43 |
| Citations | 10965 |
| Pages published | 3817 |
Dutch
International
Communities and teams
Project Databases
We hope this literature search gives you a taste of the activity and interests of Learning Analytics researchers in the Netherlands.
The literature review is based on the Scopus abstract and citation database, specifically applying the search term
( TITLE-ABS-KEY ("LEARNING ANALYTICS") AND AFFILCOUNTRY (Netherlands) )
At the time of searching (October 2025), 402 papers were returned in which at least one author had an affiliation within the Netherlands.
The search results were exported in a text format (CSV) and analysed using bibliometric methods.
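The scan's own analysis code is not shown here. As a minimal sketch of the bibliometric step, the headline figures in the fact table could be derived from such an export along these lines, assuming a standard Scopus CSV (the file name scopus_export.csv and the column names Authors, Year, Cited by, Page start and Page end, which read.csv renames to Cited.by, Page.start and Page.end, are assumptions for illustration):

```r
# A minimal sketch (not the scan's actual code): headline bibliometric
# facts from a Scopus CSV export. File name and column names are
# assumptions based on Scopus's standard export format.
papers <- read.csv("scopus_export.csv", stringsAsFactors = FALSE)

n_papers    <- nrow(papers)                            # e.g. 402
first_year  <- min(papers$Year, na.rm = TRUE)          # e.g. 2011
years_cover <- max(papers$Year, na.rm = TRUE) - first_year + 1
citations   <- sum(papers$Cited.by, na.rm = TRUE)      # e.g. 10965

# Unique authors: Scopus separates author names with ";"
authors   <- unique(trimws(unlist(strsplit(papers$Authors, ";"))))
n_authors <- length(authors)

# Pages published: sum of page ranges where both ends are numeric
pages <- suppressWarnings(
  as.numeric(papers$Page.end) - as.numeric(papers$Page.start) + 1
)
pages_published <- sum(pages, na.rm = TRUE)
```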
| Keyword | Associated Keywords |
|---|---|
| EDUCATION | SELF-REGULATED LEARNING, BLENDED LEARNING, DISPOSITIONAL LEARNING ANALYTICS, MOOCS, LEARNING DISPOSITIONS, MACHINE LEARNING, LEARNING STRATEGIES, FORMATIVE ASSESSMENT, HIGHER EDUCATION, LEARNING DESIGN, E-TUTORIALS |
| COMPUTER AIDED INSTRUCTION | DISPOSITIONAL LEARNING ANALYTICS, BLENDED LEARNING, SELF-REGULATED LEARNING, LEARNING STRATEGIES, E-TUTORIALS, LEARNING DISPOSITIONS, ONLINE LEARNING, WORKED EXAMPLES, MOOCS, FORMATIVE ASSESSMENT, SOCIAL COMPARISON |
| LEARNING ANALYTIC | SELF-REGULATED LEARNING, MACHINE LEARNING, SOCIAL COMPARISON, DISPOSITIONAL LEARNING ANALYTICS, BLENDED LEARNING, MOOCS, FORMATIVE ASSESSMENT, E-TUTORIALS, SYSTEMATIC REVIEW, HIGHER EDUCATION, LEARNING DISPOSITIONS |
| SELF-REGULATED LEARNING | MOOCS, EDUCATIONAL DATA MINING, SELF-REGULATED LEARNING, LEARNING DISPOSITIONS, TECHNOLOGY ENHANCED LEARNING, ABILITY LEVELS, PRIMARY EDUCATION, EDUCATIONAL TECHNOLOGIES, FEEDBACK, MOOC, PROCESS MINING |
| TEACHING | SELF-REGULATED LEARNING, LEARNING DESIGN, LEARNING STRATEGIES, DISPOSITIONAL LEARNING ANALYTICS, HIGHER EDUCATION, MACHINE LEARNING, COLLABORATIVE LEARNING, E-TUTORIALS, MOOC, ARTIFICIAL INTELLIGENCE, BIG DATA |
| ENGINEERING EDUCATION | LEARNING DISPOSITIONS, BLENDED LEARNING, FORMATIVE ASSESSMENT, DISPOSITIONAL LEARNING ANALYTICS, HIGHER EDUCATION, LEARNING DESIGN, MOOCS, SENSORS, LEARNING DASHBOARDS, ABILITY LEVELS, PRIMARY EDUCATION |
| CURRICULA | ENGINEERING EDUCATION, HIGHER EDUCATION, LEARNING DESIGN, MULTIMODAL LEARNING ANALYTICS, TECHNOLOGY ENHANCED LEARNING, MOOCS, FEEDBACK, EDUCATIONAL DATA MINING, MOOC, PROCESS MINING, LIFELONG LEARNING |
| BLENDED LEARNING | SELF-REGULATED LEARNING, LEARNING STRATEGIES, HIGHER EDUCATION, MULTIMODAL DATA, MACHINE LEARNING, TRACE DATA, ADAPTIVE SUPPORT, BIOSENSORS, LITERATURE REVIEW, PERSONALIZED SCAFFOLDS, REINFORCEMENT LEARNING |
| HIGHER EDUCATION | LEARNING DESIGN, MOOCS, BLENDED LEARNING, FORMATIVE ASSESSMENT, LEARNING DISPOSITIONS, MOOC, LIFELONG LEARNING, CLUSTER ANALYSIS, INDIVIDUAL DIFFERENCES, LEARNING OUTCOMES, AEROSPACE ENGINEERING |
| LEARNING PROCESS | SELF-REGULATED LEARNING, DISPOSITIONAL LEARNING ANALYTICS, BLENDED LEARNING, MULTIMODAL LEARNING ANALYTICS, LEARNING DISPOSITIONS, ARTIFICIAL INTELLIGENCE, MULTIMODAL DATA, HIGHER EDUCATION, MOOCS, FORMATIVE ASSESSMENT, MACHINE LEARNING |
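The associated keywords above reflect co-occurrence: keywords indexed on the same papers. A minimal sketch of how such associations can be computed in R, assuming the export carries a ";"-separated Index.Keywords column (the column name is an assumption; Scopus exports also carry Author.Keywords):

```r
# A minimal sketch (not the scan's actual code): keyword co-occurrence
# from a Scopus CSV export. Column name Index.Keywords is an assumption;
# keywords within a paper are separated by ";".
papers   <- read.csv("scopus_export.csv", stringsAsFactors = FALSE)
kw_lists <- lapply(strsplit(papers$Index.Keywords, ";"),
                   function(x) unique(toupper(trimws(x))))

# Keywords most often co-occurring with a given keyword, e.g. EDUCATION
assoc <- function(keyword, n = 11) {
  hits <- vapply(kw_lists, function(kws) keyword %in% kws, logical(1))
  co   <- sort(table(unlist(kw_lists[hits])), decreasing = TRUE)
  head(setdiff(names(co), keyword), n)
}
assoc("EDUCATION")
```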
Abstract: With the increase in available educational data, it is expected that Learning Analytics will become a powerful means to inform and support learners, teachers and their institutions in better understanding and predicting personal learning needs and performance. However, the processes and requirements behind the beneficial application of Learning and Knowledge Analytics as well as the consequences for learning and teaching are still far from being understood. In this paper, we explore the key dimensions of Learning Analytics (LA), the critical problem zones, and some potential dangers to the beneficial exploitation of educational data. We propose and discuss a generic design framework that can act as a useful guide for setting up Learning Analytics services in support of educational practice and learner guidance, in quality assurance, curriculum development, and in improving teacher effectiveness and efficiency. Furthermore, the presented article intends to inform about soft barriers and limitations of Learning Analytics. We identify the required skills and competences that make meaningful use of Learning Analytics data possible to overcome gaps in interpretation literacy among educational stakeholders. We also discuss privacy and ethical issues and suggest ways in which these issues can be addressed through policy guidelines and best practice examples. © International Forum of Educational Technology & Society (IFETS).
Abstract: This article introduces learning analytics dashboards that visualize learning traces for learners and teachers. We present a conceptual framework that helps to analyze learning analytics applications for these kinds of users. We then present our own work in this area and compare with 15 related dashboard applications for learning. Most evaluations evaluate only part of our conceptual framework and do not assess whether dashboards contribute to behavior change or new understanding, probably also because such assessment requires longitudinal studies. © 2013 SAGE Publications.
Abstract: Massive Open Online Courses (MOOCs) allow learning to take place anytime and anywhere with little external monitoring by teachers. Characteristically, highly diverse groups of learners enrolled in MOOCs are required to make decisions related to their own learning activities to achieve academic success. Therefore, it is considered important to support self-regulated learning (SRL) strategies and adapt to relevant human factors (e.g., gender, cognitive abilities, prior knowledge). SRL supports have been widely investigated in traditional classroom settings, but little is known about how SRL can be supported in MOOCs. Very few experimental studies have been conducted in MOOCs at present. To fill this gap, this paper presents a systematic review of studies on approaches to support SRL in multiple types of online learning environments and how they address human factors. The 35 studies reviewed show that human factors play an important role in the efficacy of SRL supports. Future studies can use learning analytics to understand learners at a fine-grained level to provide support that best fits individual learners. The objective of the paper is twofold: (a) to inform researchers, designers and teachers about the state of the art of SRL support in online learning environments and MOOCs; (b) to provide suggestions for adaptive self-regulated learning support. © 2018, © 2018 The Author(s). Published by Taylor & Francis Group, LLC.
Abstract: Learning analytics seek to enhance the learning processes through systematic measurements of learning related data and to provide informative feedback to learners and teachers. Track data from learning management systems (LMS) constitute a main data source for learning analytics. This empirical contribution provides an application of Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and LMSs. In a large introductory quantitative methods module, 922 students were enrolled in a module based on the principles of blended learning, combining face-to-face problem-based learning sessions with e-tutorials. We investigated the predictive power of learning dispositions, outcomes of continuous formative assessments and other system generated data in modelling student performance of and their potential to generate informative feedback. Using a dynamic, longitudinal perspective, computer-assisted formative assessments seem to be the best predictor for detecting underperforming students and academic performance, while basic LMS data did not substantially predict learning. If timely feedback is crucial, both use-intensity related track data from e-tutorial systems, and learning dispositions, are valuable sources for feedback generation. © 2014 Elsevier Ltd. All rights reserved.
Abstract: In this paper, we present work on learning analytics that aims to support learners and teachers through dashboard applications, ranging from small mobile applications to learnscapes on large public displays. Dashboards typically capture and visualize traces of learning activities, in order to promote awareness, reflection, and sense-making, and to enable learners to define goals and track progress toward these goals. Based on an analysis of our own work and a broad range of similar learning dashboards, we identify HCI issues for this exciting research area. © 2013 Springer-Verlag London.
Abstract: With the adoption of Learning Management Systems (LMSs) in educational institutions, a lot of data has become available describing students’ online behavior. Many researchers have used these data to predict student performance. This has led to a rather diverse set of findings, possibly related to the diversity in courses and predictor variables extracted from the LMS, which makes it hard to draw general conclusions about the mechanisms underlying student performance. We first provide an overview of the theoretical arguments used in learning analytics research and the typical predictors that have been used in recent studies. We then analyze 17 blended courses with 4,989 students in a single institution using Moodle LMS, in which we predict student performance from LMS predictor variables as used in the literature and from in-between assessment grades, using both multi-level and standard regressions. Our analyses show that the results of predictive modeling, notwithstanding the fact that they are collected within a single institution, strongly vary across courses. Thus, the portability of the prediction models across courses is low. In addition, we show that for the purpose of early intervention or when in-between assessment grades are taken into account, LMS data are of little (additional) value. We outline the implications of our findings and emphasize the need to include more specific theoretical argumentation and additional data sources other than just the LMS data. © 2008-2011 IEEE.
Abstract: Learning analytics can bridge the gap between learning sciences and data analytics, leveraging the expertise of both fields in exploring the vast amount of data generated in online learning environments. A typical learning analytics intervention is the learning dashboard, a visualisation tool built with the purpose of empowering teachers and learners to make informed decisions about the learning process. Related work has investigated learning dashboards, yet none have explored the theoretical foundation that should inform the design and evaluation of such interventions. In this systematic literature review, we analyse the extent to which theories and models from learning sciences have been integrated into the development of learning dashboards aimed at learners. Our analysis revealed that very few dashboard evaluations take into account the educational concepts that were used as a theoretical foundation for their design. Furthermore, we report findings suggesting that comparison with peers, a common reference frame for contextualising information on learning analytics dashboards, was not perceived positively by all learners. We summarise the insights gathered through our literature review in a set of recommendations for the design and evaluation of learning analytics dashboards for learners. © 2018 Copyright held by the owner/author(s).
Abstract: The widespread adoption of Learning Analytics (LA) and Educational Data Mining (EDM) has somewhat stagnated recently, and in some prominent cases even been reversed following concerns by governments, stakeholders and civil rights groups about privacy and ethics applied to the handling of personal data. In this ongoing discussion, fears and realities are often indistinguishably mixed up, leading to an atmosphere of uncertainty among potential beneficiaries of Learning Analytics, as well as hesitations among institutional managers who aim to innovate their institution’s learning support by implementing data and analytics with a view on improving student success. In this paper, we try to get to the heart of the matter, by analysing the most common views and the propositions made by the LA community to solve them. We conclude the paper with an eight-point checklist named DELICATE that can be applied by researchers, policy makers and institutional managers to facilitate a trusted implementation of Learning Analytics. © 2016 ACM.
Abstract: Technological advancements have generated a strong interest in exploring learner behavior data through learning analytics to provide both learner and instructor with process-oriented feedback in the form of dashboards. However, little is known about the typology of dashboard feedback relevant for different learning goals, learners and teachers. While most dashboards and the feedback that they give are based only on learner performance indicators, research shows that effective feedback needs also to be grounded in the regulatory mechanisms underlying learning processes and an awareness of the learner’s learning goals. The design artefact presented in this article uses a conceptual model that visualizes the relationships between dashboard design and the learning sciences to provide cognitive and behavioral process-oriented feedback to learners and teachers to support regulation of learning. A practical case example is given that demonstrates how the ideas presented in the paper can be deployed in the context of a learning dashboard. The case example uses several analytics/visualization techniques based on empirical evidence from earlier research that successfully tested these techniques in various learning contexts. © 2018 Elsevier Ltd
Abstract: Multimodality in learning analytics and learning science is under the spotlight. The landscape of sensors and wearable trackers that can be used for learning support is evolving rapidly, as well as data collection and analysis methods. Multimodal data can now be collected and processed in real time at an unprecedented scale. With sensors, it is possible to capture observable events of the learning process such as learner’s behaviour and the learning context. The learning process, however, consists also of latent attributes, such as the learner’s cognitions or emotions. These attributes are unobservable to sensors and need to be elicited by human-driven interpretations. We conducted a literature survey of experiments using multimodal data to frame the young research field of multimodal learning analytics. The survey explored the multimodal data used in related studies (the input space) and the learning theories selected (the hypothesis space). The survey led to the formulation of the Multimodal Learning Analytics Model whose main objectives are of (O1) mapping the use of multimodal data to enhance the feedback in a learning context; (O2) showing how to combine machine learning with multimodal data; and (O3) aligning the terminology used in the field of machine learning and learning science. © 2018 The Authors. Journal of Computer Assisted Learning Published by John Wiley & Sons, Ltd.
Abstract: Game-based learning researchers have been investigating various means to maximise learning in educational games. One promising venue in recent years has been the use of learning analytics in online game-based learning environments. However, little is known about how different elements of learning analytics (e.g. data types, techniques methods, and stakeholders) contribute to game-based learning practices within online learning environments. There is a need for a comprehensive review to bridge this gap. In this systematic review, we examined the related literature in five major international databases including Web of Science, Scopus, ERIC, IEEE, and compiled Proceedings of the International Conference on Learning Analytics and Knowledge. Twenty relevant publications were identified and analysed. The analysis was conducted using four core elements of learning analytics, namely the types of data that the system collects (what), the methods used for performing analytics (how), the reasons the system captures, analyzes, and reports data (why), and the recipients of the analytics (who). This study synthesises the existing literature, provides a conceptual framework as to how learning analytics can enhance online game-based learning practices in higher education, and sets the agenda for future research. © 2023 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
Abstract: Scaffolds that support self-regulated learning (SRL) have been found to improve learning outcomes. The effects of scaffolds can differ depending on how learners use them and how specific scaffolds might influence learning processes differently. Personalized scaffolds have been proposed to be more beneficial for learning due to their adaptivity to learning progress and individualized content to learning needs. The present study investigated finer-grained effects of how personalized scaffolds driven by a rule-based artificial intelligence system influenced SRL processes, especially how students learned with them. Using a pre-post experimental design, we investigated personalized scaffolds based on university students’ real-time learning processes in a technologically enhanced learning environment. Students in the experimental group (n = 30) received personalized scaffolds, while the control group (n = 29) learned without scaffolds. All students completed a 45-minute learning task with trace data recorded. Findings indicated scaffold effects on students’ subsequent learning behaviour. Additionally, only scaffold interaction correlated to essay performance and suggests that the increase in frequencies of SRL activities alone does not contribute directly to learning outcomes. As guidelines for real-time SRL support are lacking, this study provides valuable insights to enhance SRL support with adaptive learning technologies.

Practitioner notes. What is already known about this topic:
- Self-regulated learning scaffolds, especially adaptive scaffolds, improve learning.
- Personalized scaffolds have effects on self-regulated learning activities.
- Past research focused on aggregated effects of scaffolds.

What this paper adds:
- Investigates how students learn with personalized scaffolds in terms of frequencies of learning activities and scaffold interaction.
- Takes a closer look at which learning activities and when the effects of personalized scaffolds occur.
- Examines how finer-grained effects of personalized scaffolds correspond to learning outcomes.

Implications for practice and/or policy:
- Personalized scaffold effects vary across learning, and future research should consider finer-grained investigations of SRL support in order to better understand their influence on learning.
- The number of personalized scaffolds provided should be reconsidered in the future as students only use some of the support provided, especially when task demands increase.
- Personalized scaffold interaction is linked to improvement in task performance, so future research should also focus on students’ appropriate use of self-regulated learning support.

© 2023 The Authors. British Journal of Educational Technology published by John Wiley & Sons Ltd on behalf of British Educational Research Association.
Abstract: The recent advances in educational technology enabled the development of solutions that collect and analyse data from learning scenarios to inform the decision-making processes. Research fields like Learning Analytics (LA) and Artificial Intelligence (AI) aim at supporting teaching and learning by using such solutions. However, their adoption in authentic settings is still limited, among other reasons, derived from ignoring the stakeholders’ needs, a lack of pedagogical contextualisation, and a low trust in new technologies. Thus, the research fields of Human-Centered LA (HCLA) and Human-Centered AI (HCAI) recently emerged, aiming to understand the active involvement of stakeholders in the creation of such proposals. This paper presents a systematic literature review of 47 empirical research studies on the topic. The results show that more than two-thirds of the papers involve stakeholders in the design of the solutions, while fewer papers involved them during the ideation and prototyping, and the majority do not report any evaluation. Interestingly, while multiple techniques were used to collect data (mainly interviews, focus groups and workshops), few papers explicitly mentioned the adoption of existing HC design guidelines. Further evidence is needed to show the real impact of HCLA/HCAI approaches (e.g., in terms of user satisfaction and adoption). © 2024 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
Abstract: This study explored the dynamics of students’ knowledge co-construction in an asynchronous gamified environment in higher education, focusing on peer discussions in college business courses. Utilizing epistemic network analysis, sequence pattern mining, and automated coding, we analyzed the interactions of 1,319 business students. Our findings revealed that externalization and epistemic activity were prevalent, demonstrating a strong link between problem-solving and conceptual understanding. Three primary discussion types were observed: argumentative, epistemic, and social, each with unique patterns of engagement and idea integration. Effective knowledge co-construction patterns included open-ended questions with an epistemic focus, debates serving as intense knowledge co-construction arenas, and social interactions fostering a supportive and collaborative learning environment. The introduction of gamification elements led to increased student engagement and participation. Our findings emphasize the significance of structured analysis, collaboration, and argumentation in promoting effective knowledge co-construction in peer learning settings. This study offers insights into the temporal interplay of discourse dimensions and their potential for collaborative learning, enhancing our understanding of how learning analytics can be employed to discover ways in which students co-construct knowledge in asynchronous gamified environments. © The Author(s) 2024.
Abstract: A core focus of self-regulated learning (SRL) research lies in uncovering methods to empower learners within digital learning environments. As digital technologies continue to evolve during the current hype of artificial intelligence (AI) in education, the theoretical, empirical and methodological nuances to support SRL are emerging and offering new ways for adaptive support and guidance for learners. Such affordances offer a unique opportunity for personalised learning experiences, including adaptive interventions. Exploring the application of adaptivity to enhance SRL is an important and emerging area of research that requires further attention. This editorial introduces the contributions of seven papers for the special section on adaptive support for SRL within digital learning environments. These papers explore various themes related to enhancing SRL strategies through technological interventions, offering valuable insights and paving the way for future advancements in this dynamic area. © 2024 British Educational Research Association.
Abstract: Digital games are widely used in education to motivate students for science. Additionally, augmented reality (AR) is increasingly used in education. However, recent research indicates that these technologies might not be equally beneficial for students with different background characteristics. Moreover, students with different backgrounds may differ in their self-efficacy and interest when playing games and this could lead to differences in performance. Given the increased use of games and immersive technologies in education, it is important to gain a better understanding of the effectiveness of games for different student groups. This study focused on the role of students’ socio-economic status (SES) and examined whether SES was associated with in-game performance and whether interest and self-efficacy mediated potential associations between SES and in-game performance. Since log data are increasingly used to predict learning outcomes and can provide valuable insights into individual behaviour, in-game performance was assessed with the use of log data. In total, 276 early secondary school students participated in this study. The results indicate that SES has no direct or indirect effect through self-efficacy and interest on in-game performance. However, a lower self-efficacy increased the likelihood to drop out of the game. These findings suggest that students from different socio-economic backgrounds are equally interested and self-efficacious while playing the game and that their performance is not affected by their background. The affordances of AR as an immersive learning environment might be motivating enough to help mitigate possible SES differences in students.

Practitioner notes. What is already known about this topic:
- Digital games are an effective tool to increase motivation and learning outcomes of students.
- Students’ self-efficacy and situational interest influence learning outcomes and in-game performance.
- It is unclear whether digital games are equally effective for students with different socio-economic status.

What this paper adds:
- Socio-economic status (SES) of students does not affect in-game performance.
- Students with different SES are equally interested and self-efficacious.
- Lower self-efficacy and lower school track influence the likelihood of dropout.

Implications for practice and/or policy:
- Socio-economic status does not fortify the possible performance differences between students and games can be utilized as a learning tool that motivates all students equally.
- There are students who do not optimally benefit when games are implemented in education and who may need additional support.

© 2023 The Authors. British Journal of Educational Technology published by John Wiley & Sons Ltd on behalf of British Educational Research Association.
Abstract: Research conducted using variable-centered methods uses data from a “group of others” to derive generalizable laws. The average is considered a “norm” where everyone is supposed to be homogeneous and to fit the average yardstick. Deviations from the average are viewed as irregularities rather than natural manifestations of individual differences. However, this homogeneity assumption is theoretically and empirically flawed, leading to inaccurate generalizations about students’ behavior based on averages. Alternatively, heterogeneity is a more plausible and realistic characteristic of human functioning and behavior. In this paper, we review the limitations of variable-centered methods and introduce—with empirical examples—person-centered and person-specific methods as alternatives. Person-centered methods are designed with the foundational assumption that humans are heterogeneous, and such heterogeneity can be captured with statistical methods into patterns (or clusters). Person-specific (or idiographic) methods aim to accurately and precisely model the individual person (at the resolution of the single subject sample size). The implications of this paradigm shift are significant, with potential benefits including improved research validity, more effective interventions, and a better understanding of individual differences in learning, and, more importantly, personalization that is tethered to personalized analysis. Educational relevance statement: Our study presents a primer on the importance of individual differences, heterogeneity and diversity in capturing the unique peculiarities of students. In doing so, we can offer relevant personalized support that is more equitable and individualized. © 2024 The Authors
Abstract: Background: Sustaining productive student–student dialogue in online collaborative inquiry learning is challenging, and teacher support is limited when needed in multiple groups simultaneously. Collaborative conversational agents (CCAs) have been used in the past to support student dialogue. Yet, research is needed to reveal the characteristics and effectiveness of such agents. Objectives: To investigate the extent to which our analytics-based Collaborative Learning Agent for Interactive Reasoning (Clair) can improve the productivity of student dialogue, we assessed both the levels at which students shared thoughts, listened to each other, deepened reasoning, and engaged with peer’s reasoning, as well as their perceived productivity in terms of their learning community, accurate knowledge, and rigorous thinking. Method: In two separate studies, 19 and 27 dyads of secondary school students from Brazil and the Netherlands, respectively, participated in digital inquiry-based science lessons. The dyads were assigned to two conditions: with Clair present (treatment) or absent (control) in the chat. Sequential pattern mining of chat logs and the student’s responses to a questionnaire were used to evaluate Clair’s impact. Results: Analysis revealed that in both studies, Clair’s presence resulted in dyads sharing their thoughts at a higher frequency compared to dyads that did not have Clair. Additionally, in the Netherlands’ study, Clair’s presence led to a higher frequency of students engaging with each other’s reasoning. No differences were observed in students’ perceived productivity. Conclusion: This work deepens our understanding of how CCAs impact student dialogue and illustrates the importance of a multidimensional perspective in analysing the role of CCAs in guiding student dialogue. © 2024 The Authors. Journal of Computer Assisted Learning published by John Wiley & Sons Ltd.
Abstract: Background: Developments in educational technology and learning analytics make it possible to automatically formulate and deploy personalized formative feedback to learners at scale. However, to be effective, the motivational and emotional impacts of such automated and personalized feedback need to be considered. The literature on feedback suggests that effective feedback, among other features, provides learners with a standard to compare their performance with, often called a reference frame. Past research has highlighted the emotional and motivational benefits of criterion-referenced feedback (i.e., performance relative to a learning objective or mastery goal) compared to norm-referenced feedback (performance relative to peers). Objectives: Despite a substantial body of evidence regarding reference frame effects, important open questions remain. The questions encompass, for example, whether the benefits and drawbacks of norm-referenced feedback apply in the same way to automated and personalize feedback messages and whether these effects apply to students uniformly. Further, the potential impacts of combining reference frames are largely unknown, even though combinations may be quite frequent in feedback practice. Finally, little research has been done on the effects of reference frames in computer-supported collaborative learning, which differs from individual learning in meaningful ways. This study aims to contribute to addressing these open questions, thus providing insights into effective feedback design. Specifically, we aim to investigate usefulness perceptions as well as emotional and motivational effects of different reference frames—and their combination—in automated and personalized formative feedback on a computer-supported collaborative learning task. Methods: A randomized field experiment with four feedback conditions (simple feedback, norm-referenced, criterion-referenced, and combined feedback) was conducted in a course within a teacher training program (N = 282). Collaborative groups worked on a learning task in the online learning environment, after which they received one of four possible automated and personalized formative feedback. We collected student data about feedback usefulness perceptions, motivational regulation, and achievement emotions to assess the differential effects of these feedback conditions. Results: All feedback types were perceived as useful relative to the simple feedback condition. Norm-referenced feedback showed detrimental effects for motivational regulation, whereas combined feedback led to more desirable motivational states. Further, criterion-referenced feedback led to more positive emotions for overperformers and to more negative emotions for underperformers. The findings are discussed in light of the broader feedback literature, and recommendations for designing automated and personalized formative feedback messages for computer-supported collaborative learning are presented. © 2024 The Author(s). Journal of Computer Assisted Learning published by John Wiley & Sons Ltd.
This dashboard is the graphical representation of the Dutch national Learning Analytics Scan (2025). The information displayed is based on the survey, enriched with data gathered from a library search of the Scopus database.
This dashboard is rendered as a webpage using the R Markdown format with embedded R code. We would like to acknowledge a number of R packages that made this dashboard possible.
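As a brief illustration of that format (an assumption for illustration, not this dashboard's actual source), an R Markdown dashboard typically begins with a YAML header that selects a dashboard output format, followed by a setup chunk loading the packages, for example using the flexdashboard package:

````
---
title: "Dutch Learning Analytics Scan 2025"
output:
  flexdashboard::flex_dashboard:
    orientation: rows
---

```{r setup, include=FALSE}
# Illustrative only: packages a dashboard like this might load
library(flexdashboard)  # dashboard layout for R Markdown
library(ggplot2)        # plots embedded in dashboard panels
```
````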
Limitations
There were three main limitations: