- When students speed through a computer-based test, their responses are far less likely to be accurate than if they took longer to find the solution, according to a new research brief from the Northwest Evaluation Association, a nonprofit assessment company.
- When they’re “rapid guessing” on a multiple-choice item, students are also more likely to choose one of the middle answer options rather than the first or last, write Steven Wise, a senior research fellow, and Megan Kuhfeld, a research scientist — both with NWEA’s Collaborative for Student Growth. In addition, students tend to alternate between being disengaged during a test and exhibiting “solution behavior,” in which they are attempting to make a correct response based on what they know.
- Responses made while students are disengaged provide “little, if any” useful information about their achievement levels, write the authors, who recommend that educators and assessment leaders keep this in mind when interpreting test results.
The researchers suggest that if an assessment is a high-stakes test, students are likely to be more motivated to get the right answer, and therefore, their responses — even if provided rapidly — should count.
“If a student is taking a high-stakes test to attain something they want — such as a course grade, acceptance into a college or gaining a credential to get a job — we generally expect that they will be engaged as they try to succeed at this hurdle standing in the way of getting what they want to attain,” Wise said in an email. “In this context, disengagement — though unfortunate — would be viewed as simply their not demonstrating the test performance needed, and test givers would usually not feel compelled to ignore rapid guesses during scoring.”
But if the stakes for an individual student are low — as with an assessment that is being used to measure proficiency rates among a group of learners — then the rapid responses should not be included in the scoring “because their presence will diminish score accuracy,” he said. The goal for educators and researchers, they write, should be to develop assessments that are more engaging for students so they don’t temporarily check out during the process.
One way assessment experts are trying to do that is through game-based assessment, in which elements of video games and puzzles are used to measure students’ skills. While much of that work still exists at the research and pilot phase, some models are moving toward broader implementation.
With more states and districts exploring performance-based and alternative assessments, a 2016 study by researchers at Stanford University showed that making a test relevant to students’ prior experiences and basing an assessment on an authentic or “real-world” issue were among the features that held students’ interest.
The study, which included interviews with students and teachers at four high schools in the San Francisco Bay Area, also found that collaboration, tapping into higher-order thinking skills, giving students some autonomy, and incorporating self-assessment or reflection prompts were important ways to keep students engaged.
“The development of standardized assessments does not have to come at the expense of student engagement,” the Stanford researchers wrote. “The time is ripe with opportunity — especially with the introduction of the Every Student Succeeds Act, which explicitly specifies assessment provisions such as performance tasks that may offer opportunities to increase student engagement.”