Jorge Luis Bazán is Associate Professor in the Department of Applied Mathematics and Statistics of the Institute of Mathematics and Computer Science at the University of São Paulo (SME/ICMC/USP), Coordinator of the UFSCar-ICMC/USP Interinstitutional Graduate Program in Statistics, and Coordinator of the Latent Variable Research Group. His research lies in the area of data science and statistics, focusing mainly on regression models, latent variable models, item response theory models, cognitive diagnostic models, Bayesian inference, categorical data, psychometrics, and statistical education. He holds a degree in Statistical Engineering from the National Agrarian University La Molina of Peru (1997), a degree in Psychology from the National Major University of San Marcos of Peru (2003), and a PhD in Statistics from the Institute of Mathematics and Statistics of the University of São Paulo (2005). He completed a postdoctoral stay in the Department of Mathematics Didactics at the University of Granada in 2009, was a FAPESP visiting researcher at IME-USP in 2010, a foreign visiting professor at IMECC-UNICAMP in 2012, and a visiting scholar in the Department of Statistics of the University of Connecticut, United States, in 2018-2019.
Title: Classification in educational data: Cognitive diagnostic models using different R packages
Abstract: In recent years, Cognitive Diagnosis Models (CDMs) have gained considerable attention in the literature. Different estimation methods have been considered, also taking into account diverse scoring methodologies. CDMs are useful psychometric tools for identifying test-takers’ profiles, that is, their level of possession of a set of latent attributes underlying a latent variable; the latent variable may be a cognitive skill (say, mathematics achievement), a psychological trait, or an attitude. In this workshop we will discuss classical and Bayesian approaches to the estimation of the parameters of cognitive diagnostic models (CDMs) using different R packages. Specifically, we will show the code to reproduce an application from the paper by da Silva, de Oliveira, von Davier, and Bazán (2018) and comment on the use of this type of model in educational assessment.
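As a concrete illustration of the kind of model the workshop covers, the sketch below shows the item response probability under the DINA model, one of the most common CDMs. It is written in Python with made-up slip and guessing values purely for illustration; the workshop itself works with R packages.

```python
# Minimal sketch of the DINA model (illustrative only; parameter values are hypothetical).
# A respondent answers correctly with probability 1 - slip if they master every
# attribute the item requires (eta = 1), and with probability guess otherwise.

def dina_prob(alpha, q, slip, guess):
    """P(correct) for attribute pattern alpha on an item with q-vector q."""
    eta = all(a >= qk for a, qk in zip(alpha, q))  # masters all required attributes?
    return (1 - slip) if eta else guess

# A respondent mastering attributes 1 and 2 (not 3), on an item requiring 1 and 2:
p = dina_prob(alpha=[1, 1, 0], q=[1, 1, 0], slip=0.1, guess=0.2)
print(p)  # 0.9
```

Estimation then consists of recovering the slip and guess parameters and each respondent's attribute pattern from observed responses, which is where the classical and Bayesian approaches of the workshop come in.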
Alina A. von Davier is a psychometrician and researcher in computational psychometrics, machine learning, and education. Von Davier is a researcher, innovator, and executive leader with over 20 years of experience in EdTech and the assessment industry. She is the Chief of Assessment at Duolingo, where she leads the Duolingo English Test research and development area. She is also the Founder and CEO of EdAstra Tech, a service-oriented EdTech company. In 2022, she joined the University of Oxford as an Honorary Research Fellow and Carnegie Mellon University as a Senior Research Fellow. Von Davier completed an M.S. in Mathematics at the University of Bucharest in 1990. In 2000, she earned a doctorate in mathematics from Otto von Guericke University Magdeburg. In 2019 she completed classes in an Executive MBA program at Harvard Business School. Her research interests include developing novel approaches to test development using AI, machine learning, data mining, computational psychometrics, Bayesian inference methods, and stochastic processes, as well as developing methodologies for measuring social and emotional skills such as critical thinking, creative thinking, and collaboration.
Title: AI-Driven content generation for educational assessment: Implications for teaching, testing, and the future of education
Abstract: As artificial intelligence (AI) continues to advance, its applications in educational assessment are becoming increasingly significant. This presentation explores the potential of AI-driven content generation in educational assessment and its implications for teaching practices and the future of education. By leveraging the power of natural language processing and deep learning algorithms, large language models (LLMs) and other large computational models are now capable of generating contextually relevant, diverse, and high-quality content for educational assessments (text, images, animation, voice, etc.). This revolutionizes the way educators and developers design, administer, and evaluate assessments, allowing for greater efficiency and a more personalized learning and testing experience for students. The implementation of AI-driven content generation in testing presents numerous opportunities for teachers, students, and test developers in the assessment industry. For teachers, it offers the potential to streamline the creation of longitudinal classroom quizzes, reduce bias, and improve the validity and reliability of these evaluative data. For students, it promises a more engaging and adaptive assessment experience, tailored to their individual learning needs and preferences. For test developers, it offers an efficient way to scale up the number of items needed to protect the security of the test. However, the integration of AI in educational assessment also raises several concerns and challenges. These include issues of construct relevance, cheating, data privacy and security, the potential for perpetuating existing inequalities in education, and the ethical considerations surrounding the use of AI-generated content.
In this presentation I will provide an analysis of the current state of AI-driven content generation in educational assessment, discuss its potential impact on teaching practices, and present a vision for the future of education in light of these advances. I will illustrate the application of LLMs for generating test questions within the theoretical ecosystem of digital-first assessments such as the Duolingo English Test (DET) and discuss the newly developed DET Responsible AI Standards. Ultimately, I hope to contribute to a meaningful dialogue on how AI can be harnessed to revolutionize educational assessment and teaching practices while addressing the associated ethical and societal concerns.
Inés M. Varas holds a PhD in Statistics from Pontificia Universidad Católica de Chile. Her research interests are related, but not limited, to statistical modeling in psychometrics, with emphasis on test equating methods and their applications to other research areas, including biostatistics. She has worked at the Chilean Ministry of Health and as a consultant for the Department of Educational Evaluation, Measurement and Registration (DEMRE, Chile).
Title: Latent models for linking measurements
Abstract: Equating is the most popular linking method used to adjust scores on different test forms so that the scores can be used interchangeably. These methods map the scores of test form X into their equivalents on the scale of test form Y by using score distributions. Equating methods tackle differences in distributions attributed to differences in the difficulty of the forms; to overcome differences attributed to differences in the ability of test takers, different data collection designs are considered. Although test score scales are usually subsets of the integers, in the equating literature the mapping is estimated from continuous approximations of the score distributions, so equated scores are no longer discrete values. Varas et al. (2019, 2020) proposed the latent equating method to obtain discrete equated measurements, based on a latent representation of the score distributions and a Bayesian nonparametric model for it. An extension of the latent method is proposed for different sampling designs, including the non-equivalent groups with anchor test (NEAT) design, in which common items are used to link scores of test takers sampled from different populations. Several methods are discussed to evaluate the performance of the extension on simulated and real datasets.
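The basic equipercentile idea behind such mappings — sending a form-X score to the form-Y score with the same percentile rank, e_Y(x) = G^{-1}(F(x)) — can be sketched as follows. This is a Python illustration with hypothetical score data, not the latent method of the talk.

```python
# Minimal equipercentile-equating sketch (hypothetical scores; illustrative only).
import numpy as np

def equate(x_scores, y_scores, x):
    """Map score x on form X to the form-Y scale via matching percentile ranks."""
    p = np.mean(np.asarray(x_scores) <= x)  # percentile rank of x on form X
    return np.quantile(y_scores, p)         # score with the same rank on form Y

x_scores = [3, 5, 6, 6, 7, 8, 9, 10]  # hypothetical form-X scores
y_scores = [2, 4, 5, 5, 6, 7, 8, 9]   # hypothetical form-Y scores (harder form)
print(equate(x_scores, y_scores, 7))  # 6.375
```

Note that the equated value need not be an integer even though both score scales are integer-valued — exactly the issue that motivates a latent, discrete equating approach.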
Paula Fariña holds a Bachelor’s degree in Economics from the University of San Andrés, Buenos Aires, Argentina (2001), and a Ph.D. in Statistics from the Pontifical Catholic University of Chile (2010). Since 2011, she has been working as an Assistant Professor at the School of Industrial Civil Engineering at Diego Portales University, Santiago, Chile. Throughout her career, she has taught courses in Statistical Inference, Econometrics, Bayesian Statistics, among others. Her research areas include psychometrics, focusing on models based on Item Response Theory (IRT); econometrics, specifically discrete choice models; and technology applied to education, particularly Computerized Adaptive Tests.
Title: Bayesian networks in computerized adaptive test for statistical learning
Abstract: Bayesian Networks are a powerful tool for modeling complex relationships between variables in various fields, including education. In particular, they are increasingly being used in Computerized Adaptive Learning (CAL) to personalize the learning experience for students. By incorporating Bayesian Networks in CAL, the system can adapt to the student’s needs, abilities, and learning preferences, providing a more effective and efficient learning experience. In this workshop we will explore Bayesian Networks, including how they can be used to model student knowledge, track learning progress, and provide personalized feedback and recommendations. A CAL App designed for Statistical Learning is also presented as an example.
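As a toy illustration of the belief updating such a network performs, the Python sketch below applies Bayes' rule to a single skill → response link. The slip and guess probabilities are hypothetical and not taken from the App described in the workshop.

```python
# Minimal Bayes-rule sketch of a two-node network (skill -> observed response).
# Slip/guess values are hypothetical, for illustration only.

def update_mastery(prior, correct, slip=0.1, guess=0.25):
    """Posterior P(skill mastered) after observing one answer."""
    p_if_mastered = (1 - slip) if correct else slip
    p_if_not = guess if correct else (1 - guess)
    evidence = p_if_mastered * prior + p_if_not * (1 - prior)
    return p_if_mastered * prior / evidence

belief = 0.5                        # start undecided about mastery
for ans in [True, True, False]:     # two correct answers, then one incorrect
    belief = update_mastery(belief, ans)
print(round(belief, 3))
```

A full network chains many such skill and response nodes, so evidence about one skill also propagates to related skills — which is what lets a CAL system choose the next item adaptively.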
Walter L. Leite is Professor in the Research and Evaluation Methodology Program of the School of Human Development and Organizational Studies, College of Education, University of Florida, and Director of the Virtual Learning Lab. His work in the Virtual Learning Lab focuses on causal inference using machine learning and log data from virtual learning environments and intelligent tutoring systems. He has published extensively on model-based approaches to estimating the effects of virtual learning environments on student achievement using propensity score analysis, multilevel modeling, structural equation modeling, and finite mixture modeling. His book Practical Propensity Score Methods Using R disseminates methodology for implementing propensity score matching, weighting, and stratification for cross-sectional, longitudinal, and multilevel data. He also created the podcast series Fairness and Equity in AI for Education, which is available on all major podcasting platforms. He teaches Structural Equation Modeling, Machine Learning for Causal Inference, Quasi-Experimental Design and Analysis in Education, and Survey Design and Analysis in Education. He is passionate about openness and reproducibility in research, and code for his projects is available on the Open Science Framework. He has authored two R packages: SEMsens implements sensitivity analysis methods for structural equation modeling, and ShortForm selects items for short forms of psychometric scales, optimizing criteria selected by the researcher. He currently serves on the editorial boards of the journals Structural Equation Modeling, Multivariate Behavioral Research, and the Journal of Experimental Education. He has collaborated extensively with the Lastinger Center for Learning of the University of Florida on evaluations of educational products used by hundreds of thousands of students annually in Florida.