In low-stakes tests, students may have little incentive to put forth their maximum effort. As a result, differences in test performance may reflect not only differences in student knowledge but also differences in student effort. Research stresses that ignoring effort when interpreting results from low-stakes assessments can lead to biased conclusions. An important related question is to what extent test structure, in particular the ordering of questions by difficulty, affects student performance and effort. In this project, we use data from the Programme for International Student Assessment (PISA), a low-stakes test in which the order of questions and their levels of difficulty vary randomly across test booklets. We study the role that the difficulty of a prior set of questions may play in shaping performance and effort throughout the rest of the test, whether this effect differs by subject, and whether it helps explain gender achievement gaps.
Director: Gema Zamarro, Ph.D.

Research Fellows:
Albert Cheng, Ph.D.
Jay P. Greene, Ph.D.
Jonathan N. Mills, Ph.D.
Julie R. Trivitt, Ph.D.
Lina Anaya
Molly Beck
Dillon Fuchsman
Katherine Kopotic
Matthew Lee