The Problem with Proctoring (Part 1)
Sometimes students cheat. But students have always cheated; this is not a new phenomenon. What has changed as more of our teaching moves online is the instructor's relationship to, and perception of, cheating. In an on-campus course, students often take a test together in the same room, under the watchful eye of the course instructor. It's more difficult for students to look up answers online, or work with someone else to find the right answers. Instructors can feel like they have more control over the environment.
Test-taking online can feel very different. That's why so many instructors have turned to online proctoring programs in an effort to maintain the "integrity" (and their "control") of online testing. Proctoring software tools work by monitoring students through their webcams and/or by taking control of functions on their computers. Many proctoring programs track student eye movements to determine whether students are looking away from the screen at potentially unauthorized materials. They often require a detailed scan of the room in which the student is taking a test. In addition to programs proctored by a computer algorithm, many proctoring services, such as ProctorU, also offer options to have exams proctored synchronously by a human. AI-proctored exams gather all the data collected while the student is taking the exam and produce an "incident report" that quantifies the likelihood that students were cheating based on, among other factors, their eye or body movements, clicking and typing behavior, and other environmental flags such as sounds or movement in the room. For Proctorio, this report is called a "suspicion score."
Critiques of proctoring programs have ramped up in the last couple of years. A quick search reveals numerous reports of inaccuracies in the AI-produced reports, including students flagged because the AI system could not recognize Black faces. In May 2021, ProctorU announced that it would be changing its model and would no longer report potential student misconduct based solely on AI-produced reports, due to these common inaccuracies. Instead, human proctors will review behaviors flagged by the system, increasing the cost of the program. Human proctoring online is not without its issues, including students feeling "creeped out" by a stranger monitoring them and the potential for harassment and invasion of privacy. For that matter, introducing a human proctor tasked with behavioral monitoring who has no connection to the student, faculty, institution, or institutional ethics processes is inherently problematic.
Some schools are starting to reject these proctoring programs due to reports of discrimination, unethical behavior from the company's founder, litigiousness against students and educational technologists, emotional distress among students, and serious accessibility issues. Additionally, the use of proctoring programs raises concerns about the security and privacy of student data.
Beyond the ethical and equity concerns raised by the use of proctoring software, there is little compelling evidence that Proctorio reduces cheating. In fact, many students have found ways to work around restrictive proctoring systems. Research has shown that students tend to score lower on proctored tests, which could suggest that those students were cheating, but other studies have found that proctored tests cause more test-taking anxiety, which could also contribute to lower scores. There's also an argument that it is not really possible to compare proctored and non-proctored tests in a meaningful way.
In an interview quoted in the New York Times, Proctorio CEO Mike Olson noted that the proctoring software did not penalize students for getting flagged; rather, the responsibility for reviewing the footage and/or reports and making a judgment about cheating rests with the instructor. He notes elsewhere that instructors, not the program, establish what kinds of behaviors are flagged as suspicious. Online proctoring returns that sense of control to us, but the fixation on controlling the student, the exam, and the environment obscures the purpose of the exercise in the first place: providing students an opportunity to demonstrate what they've learned and how they've grown.
At the end of the day, even if proctoring were extremely effective in discouraging cheating, we have to ask ourselves if it is worth it. If our teaching practices are meant to empower students to learn, and if we lead with empathy and work toward harm reduction in our teaching and learning spaces, can we defend the use of these programs despite all the negative emotional, psychological, physical, and financial effects they have on students? Proctoring programs position our students as our adversaries, and orient our work toward punishment rather than pedagogy. We have an attachment to control only when it's enacted as a mechanism of compliance, rather than support (see: letting human proctors flag "problematic" student behavior).
The volume of links in the preceding paragraphs was intentional, to demonstrate the scale of issues raised by the use of remote proctoring tools, and the scale of damage these tools and practices are doing to our students. The evidence is clear: remote proctoring causes more harm than it prevents.
We assume that students are going to cheat, but we rarely consider any motivation for that cheating other than that a student is lazy, bad, or wrong. How would our practice change if, instead of designing our courses, assignments, and assessments to stop cheating, we decided to try to trust and understand our students? Our focus would then be on engagement, pedagogy, and crafting learning experiences in which our students would not want to cheat (or could not cheat, even if they wanted to).
In part two of this blog, we’ll explore some alternatives to proctoring for assessments as well as strategies for designing for engagement to increase academic integrity.
Thanks to Dr. Jason Drysdale for reviewing and editing this piece.