Scantegrity Responds to Rice Study on Usability of the Scantegrity II Voting System
(2014)
Debunking a Flawed Study: Why Scantegrity II’s Usability Isn’t as Bad as Rice Claimed
In the high-stakes world of election technology, trust is everything. When a study from Rice University suggested that advanced voting systems like Scantegrity II were failing voters, it raised serious concerns. But a detailed rebuttal from the very team behind Scantegrity II exposes critical flaws in that Rice study, revealing that its damning conclusions were based on a misrepresentation of the system and flawed testing methodology. This isn’t just academic squabbling; it’s about ensuring that the public has confidence in the integrity of our voting processes.
The Problem: A Study That Missed the Mark
The Rice University study, published in the Journal of Election Technology and Systems, claimed that only 58% of votes cast on “tamper-resistant” systems like Scantegrity II were successfully counted in their lab experiment. This alarming statistic suggested these advanced voting systems were fundamentally unusable. The researchers focused on Scantegrity II, an “end-to-end verifiable” system designed to let voters confirm their vote was recorded correctly without compromising secrecy. The implication was that adding security features made voting confusing and error-prone for ordinary citizens.
Why This Matters: Trust in Our Democracy
Voting system usability isn’t a trivial detail; it’s foundational to democratic legitimacy. If voters can’t cast their ballots correctly, or if they distrust the process, faith in elections erodes. End-to-end verifiable systems like Scantegrity II represent a crucial advance, offering transparency that traditional systems lack. They allow voters to verify their vote was counted as cast while maintaining ballot secrecy – a powerful tool against fraud and error. A flawed study suggesting these systems are unusable could stall progress and leave us stuck with less secure, less transparent technology. It’s essential to get the facts right.
The Key Findings: Flaws in the Rice Study
The Scantegrity team’s rebuttal meticulously dismantles the Rice study’s methodology:
- Wrong System, Wrong Results: The Rice researchers didn’t test the actual Scantegrity II system used in the successful 2009 Takoma Park, Maryland, municipal election. Instead, they built a version with a separate ballot scanner and ballot box. In the real system, the scanner is integrated with the ballot box – you scan the ballot, and it drops directly inside. The lab setup forced voters to perform two separate actions (scan, then drop), creating an unnecessary hurdle. The integrated design eliminates this failure mode, which is why 100% of votes were counted in Takoma Park.
- Confusing Instructions: The study gave voters instructions based on the integrated Takoma Park system, even though its lab setup required the separate scan-and-drop steps. This mismatch likely confused participants, contributing to errors unrelated to Scantegrity’s core verification features.
- Blaming the Wrong Layer: Scantegrity II has a “layered” design. It works like a standard optical scan system (familiar to many voters) but adds optional verification steps (such as using a special pen to reveal a confirmation code). The Rice study’s failures weren’t about these optional verification steps; they were about voters failing the basic scan-and-drop process of the underlying optical scan layer – a problem caused by the researchers’ poor implementation, not Scantegrity’s design.
- Missing Control Group: A proper usability study needs a “control” – in this case, a standard optical scan system implemented exactly like the flawed Scantegrity-inspired lab setup. Without it, you can’t isolate whether the problems came from the basic system design or from the added verification features. The Rice study lacked this critical comparison.
- Real-World Success vs. Lab Failure: The proof is in the pudding. In the actual 2009 Takoma Park election using the correct integrated system, 100% of 1,728 votes were successfully cast and counted. The system was used again successfully in 2011. This real-world success directly contradicts the lab’s abysmal 58% “counted” rate.
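The control-group point can be made concrete with a back-of-the-envelope sketch. All numbers below except the study’s reported ~58% are invented for illustration; this is not data from either paper, just the logic of why a control condition matters:

```python
# Hypothetical illustration of the missing-control argument.
# Only the gap between two conditions sharing the same lab setup
# can be attributed to Scantegrity's verification features.

def counted_rate(successes: int, attempts: int) -> float:
    """Fraction of ballots successfully cast and counted."""
    return successes / attempts

# What the lab observed: one condition only (Scantegrity-style ballot
# on the non-standard separate scan-and-drop setup).
scantegrity_lab = counted_rate(58, 100)  # the study's reported ~58%

# The missing control: the SAME separate scan-and-drop setup, but with a
# plain optical-scan ballot and no verification features. This value is
# invented; the study never measured it.
plain_optical_control = counted_rate(60, 100)

# If the control had also scored poorly, the failures would point at the
# two-step lab implementation shared by both conditions, not at the
# verification layer. Only the difference is attributable to Scantegrity.
gap = plain_optical_control - scantegrity_lab
print(f"Scantegrity condition: {scantegrity_lab:.0%}")
print(f"Control condition:     {plain_optical_control:.0%}")
print(f"Gap attributable to verification features: {gap:.0%}")
```

With invented numbers like these, almost none of the observed failure would be attributable to the verification features – which is exactly why the rebuttal calls the missing comparison fatal to the study’s conclusions.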
Beyond the Core Flaws
The rebuttal also highlights other issues:
- No Help for Voters: In the lab, researchers refused to answer voter questions, unlike real elections where poll workers assist.
- Insufficient Instructions: Voters in the lab only got printed instructions, while Takoma Park provided videos, posters, and multiple touchpoints.
- Misunderstanding Features: The Rice paper incorrectly claimed Scantegrity II requires random candidate order (it doesn’t) and that receipts are stamped (they aren’t, for security reasons).
The Bottom Line: A Call for Better Research
The Scantegrity team doesn’t dismiss the importance of usability testing – they applaud the effort. However, they insist that studies must be rigorously designed. The Rice study’s conclusions about Scantegrity II’s usability are unsupported because it tested a non-standard, poorly implemented version of the system and failed to isolate the impact of its innovative verification features. Its failures stemmed from basic implementation errors in the underlying voting mechanism, not the advanced security Scantegrity provides.
For democracy to thrive, we need voting systems that are both secure and easy to use. Flawed studies risk derailing progress towards these goals. The Takoma Park experience proves Scantegrity II can work flawlessly in the real world. The lesson? When evaluating complex systems, especially those as critical as our voting infrastructure, precision in methodology and a deep understanding of the technology being tested are non-negotiable. Only then can we build the trustworthy elections the public deserves.