Saturday, August 16, 2003

More on e-Voting


Kudos to K for the tip! Many links from Bruce Schneier's Crypto-Gram:

The software running on the touchscreen machines in individual voting booths at a precinct:

E-Voting Flaws Risk Voting Fraud
Analysis of an Electronic Voting System (PDF)
Bald-Faced Lies About Black Box Voting Machines and The Truth About the Rob-Georgia File

The software on the server used to store and report votes from multiple precincts:

Bigger Than Watergate!
Inside a U.S. Election Vote Counting Program
Voting Machines Blasted by Scientists
How To Rig An Election (mostly material from earlier Scoop reports)

General or related to both applications:

System Integrity Flaw Discovered at Diebold Election Systems (distribution of patches and/or test code for either system)
** (404 - might just be me, Sid)
Proprietary Voting Computers: Threat or Menace?
Could the Next US Election Be Stolen?


I'm going to comment on something that seems small, but isn't:

For a mission-critical system (say, software that's going to run on the space shuttle, or, perhaps, an electronic voting system), the audit and quality control process for developing the software is as mission-critical as the software itself. If you can't prove that you validated X, or that you developed Y under specific controls in order to keep as many feature flaws[*] and outright bugs[**] out of the software as possible, then you cannot release that software for use with any confidence.

People's lives, or perhaps the life of a government, depend on this software. The development process must be thoroughly documented and strictly adhered to. If it's not, you can't trust the application.

So, when you read about the Rob-Georgia file over at Scoop, or one of Bev Harris's earlier articles (the System Integrity Flaw article) on the use of an FTP server to distribute software patches at the last minute before an election, you have to understand that if those patches weren't rigorously tested (and the 'Rob' article indicates they weren't), then the entire touchscreen application on any machine where a patch has been installed is suddenly suspect. Because you don't *know* what the software will do: that version of the application has not been tested, validated, or certified by any kind of oversight group. Because it's a last-minute patch, and that's the definition of 'last minute'. A developer pounded out a fix, performed the barest minimum of testing (because that's all a developer can do), and released a revised .cpp or .h file for use. "Here!" they said, wiping sweat off their brow, "this should fix it!" That's how it happens. Honest to god.
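
To make that contrast concrete, here is a minimal, hypothetical sketch. The function name, the arithmetic, and the numbers are all invented for illustration (none of this is Diebold's code); the point is the difference between the one-off check a developer can do at the last minute and the documented, signed-off test cases a controlled release process would demand.

    #include <cassert>
    #include <iostream>

    // Hypothetical patched routine: the name and logic are invented purely
    // to illustrate the point. Imagine this is the "fix" in the revised .cpp.
    int tallyBallots(int countedSoFar, int newBatch) {
        return countedSoFar + newBatch;
    }

    int main() {
        // The barest minimum of testing: one hand-picked input, checked by eye.
        std::cout << tallyBallots(100, 25) << '\n';  // developer sees 125, ships it

        // What a controlled release process would require instead: a documented
        // suite of cases (boundaries, bad input, regression checks), run and
        // signed off by someone other than the developer before the patch ever
        // reaches a voting machine.
        assert(tallyBallots(0, 0) == 0);
        assert(tallyBallots(999, 1) == 1000);
        return 0;
    }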

Was a standard development methodology followed (requirements-design-develop-test-release)? Were design and code reviews used? Where is the documentation - the audit trail - that proves it?

[*] a mistake in the design itself -- "we thought we wanted it [the code] to send an email, but now we think it should print a report."
[**] a mistake in implementing some aspect of the design -- "the code prints the same report every time regardless of data."
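
For anyone who wants the [**] case spelled out, here is a minimal sketch of that exact bug. The function, candidate names, and vote counts are all made up for illustration:

    #include <iostream>
    #include <vector>

    // The footnoted bug [**], illustrated: the routine accepts the actual
    // totals but never looks at them, so it prints the same report every time.
    void printPrecinctReport(const std::vector<int>& voteTotals) {
        // BUG: hard-coded output; the voteTotals parameter is ignored entirely.
        std::cout << "Candidate A: 1200 votes\n";
        std::cout << "Candidate B:  950 votes\n";
    }

    int main() {
        std::vector<int> actualTotals = {875, 1402};  // the real data, never reported
        printPrecinctReport(actualTotals);            // the canned report prints anyway
        return 0;
    }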

