Red Flag On Biometrics: Iris Scanners Can Be Tricked

At the Black Hat Security Conference in Las Vegas this week, Javier Galbally revealed that it’s possible to spoof a biometric iris scanning system using synthetic images derived from real irises. The Madrid-based security researcher’s talk is timely, coming on the heels of a July 23 Israeli Supreme Court hearing where the potential vulnerabilities of a proposed governmental biometric database drove the debate. Consider the week’s events a reminder that if the adoption of biometric identification systems continues apace without serious contemplation of the pitfalls, we’re headed for trouble.

When it comes to the collection and storage of individuals’ digital fingerprints, iris scans, or facial photographs, system vulnerability is a chief concern. A social security number can always be cancelled and reissued if it’s compromised, but it’s impossible for someone to get a new eyeball if an attacker succeeds in seizing control of his or her digital biometric information.

Among the various biometric traits that can be measured for machine identification -- such as fingerprints, face, voice, or keystroke dynamics -- the iris is generally regarded as the most reliable. Yet Galbally’s team of researchers has shown that even the method traditionally presumed to be foolproof is quite susceptible to being hacked.

The project, unveiled for the first time at the security researchers’ conference, made use of synthetic images that match digital iris codes linked to real irises. The codes, which are derived from the unique measurements of an individual’s iris and contain about 5,000 pieces of information, are stored in biometric databases and used to positively identify people when they position their eyes in front of the scanners. By printing out the replica images on commercial printers, the researchers found they could trick the iris-scanning systems into confirming a match.
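
Commercial matchers such as VeriEye are proprietary, but the general idea behind iris-code matching can be sketched with a toy model: treat each code as a fixed-length bit string and accept a comparison when the fraction of differing bits (the Hamming distance) falls below a threshold. The Python sketch below illustrates only that concept; the code length, threshold, and helper names are illustrative assumptions, not details from Galbally’s talk or from any commercial product.

```python
# Toy model of iris-code matching, not the proprietary algorithm used by
# VeriEye or any other commercial system. An iris code is represented as a
# fixed-length bit string, and two codes "match" when their normalized
# Hamming distance falls below a threshold. The length and threshold below
# are illustrative assumptions, not figures from the talk.
import random

CODE_LENGTH = 5000       # roughly the number of bits mentioned above
MATCH_THRESHOLD = 0.32   # illustrative accept threshold

def random_iris_code(length=CODE_LENGTH):
    """Stand-in for the code a scanner would extract from a real iris."""
    return [random.randint(0, 1) for _ in range(length)]

def hamming_distance(code_a, code_b):
    """Fraction of bits on which two iris codes disagree."""
    differing = sum(a != b for a, b in zip(code_a, code_b))
    return differing / len(code_a)

def is_match(code_a, code_b, threshold=MATCH_THRESHOLD):
    """Accept the comparison if the codes are sufficiently similar."""
    return hamming_distance(code_a, code_b) < threshold

if __name__ == "__main__":
    enrolled = random_iris_code()
    # A genuine re-scan of the same eye differs only slightly from enrollment.
    rescan = [1 - bit if random.random() < 0.1 else bit for bit in enrolled]
    # An unrelated iris produces a code that disagrees on about half the bits.
    impostor = random_iris_code()
    print("genuine distance:", round(hamming_distance(enrolled, rescan), 3))
    print("impostor distance:", round(hamming_distance(enrolled, impostor), 3))
    print("genuine accepted:", is_match(enrolled, rescan))
    print("impostor accepted:", is_match(enrolled, impostor))
```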

The tests were carried out against a commercial system called VeriEye, made by Neurotechnology. The synthetic images were produced using a genetic algorithm. With the replicas, Galbally found that an imposter could spoof the system at a rate of 50 percent or higher. A Wired article hit on the significance of this discovery:
This is the first time anyone has essentially reverse-engineered iris codes to create iris images that closely match the eye images of real subjects, creating the possibility of stealing someone’s identity through their iris.
This revelation not only exposes a security hole in a commercial iris-recognition system, but also proves that prominent tech firm and FBI contractor BI2 Technologies -- which is building a database of iris scans for the Next Generation Identification System -- was wrong when it noted on its website that biometric templates “cannot be reconstructed, decrypted, reverse-engineered or otherwise manipulated to reveal a person’s identity.”
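
To make the reverse-engineering claim concrete, the toy sketch below evolves a population of random bit strings toward a stored target code, keeping the candidates that score best and mutating them -- the essence of a genetic-algorithm search. The actual attack evolved synthetic iris images against a commercial matcher’s similarity score; the bit-string representation, the shortened code length, and every parameter here are assumptions made purely to keep the example small and runnable.

```python
# Toy sketch of a genetic-algorithm search for a code that a matcher would
# accept. Here hamming_distance(candidate, target) stands in for the
# similarity score a real matcher would return; the real attack evolved
# synthetic iris images rather than raw bit strings. All names, lengths,
# and parameters below are assumptions chosen to keep the demo fast.
import random

CODE_LENGTH = 256        # scaled down from ~5,000 bits so the demo runs quickly
MATCH_THRESHOLD = 0.32   # illustrative accept threshold
POPULATION_SIZE = 30
MAX_GENERATIONS = 500
MUTATION_RATE = 0.01     # probability of flipping each bit in a mutant

def hamming_distance(a, b):
    """Fraction of bits on which two codes disagree (lower is more similar)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def mutate(code):
    """Flip a small random fraction of bits to create a new candidate."""
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in code]

def evolve_spoof(target):
    """Evolve candidates until one would be accepted as a match for target."""
    population = [[random.randint(0, 1) for _ in range(CODE_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for generation in range(MAX_GENERATIONS):
        # Rank candidates by the stand-in similarity score.
        population.sort(key=lambda code: hamming_distance(code, target))
        best = population[0]
        if hamming_distance(best, target) < MATCH_THRESHOLD:
            return best, generation
        # Keep the better half unchanged and refill the pool with mutants.
        survivors = population[:POPULATION_SIZE // 2]
        mutants = [mutate(random.choice(survivors))
                   for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + mutants
    return population[0], MAX_GENERATIONS

if __name__ == "__main__":
    target = [random.randint(0, 1) for _ in range(CODE_LENGTH)]
    spoof, generations = evolve_spoof(target)
    print("generations used:", generations)
    print("final distance to target:", round(hamming_distance(spoof, target), 3))
```

The point the sketch makes is that a matcher’s own scoring feedback can guide an attacker toward a template that registers as a match, without the attacker ever seeing the original eye.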

Any new detection of biometric system flaws is relevant in the context of the massive governmental identification programs moving forward at the global level. There’s India’s bid to create the world’s largest database of irises, fingerprints and facial photos, for example, and Argentina’s creation of a nationwide biometric database containing millions of digital fingerprints. Just this week in Israel, High Court justices criticized a planned biometric database as a “harmful” and “extreme” measure.

Lawmakers who approve such identification schemes should give serious consideration to any new information surfacing about biometric system vulnerabilities.
