Think your fingerprint secured devices are safe? Think again.

Scientists have successfully developed synthetic fingerprint images that can mimic real prints. This possibility poses a critical threat to biometrics. The research is educational in nature, intended to raise awareness of weaknesses in the system and to suggest an approach for developing a protective shield. However, it also demonstrates that, in the absence of such protection, systems and data are always at risk of breach by bad actors.

The research group, led by Philip Bontrager, published its findings in October 2018 and presented the results at a conference in Los Angeles.

When we say technology knows no bounds, we mean it. Is that always for the good? No. The idea of progress and advancement is not defined by human ethics. Rather, it is a system that runs on logic and works constantly to remove flaws. From a human perspective, removing flaws means reducing error; from a machine perspective, it is a two-way fight: develop, create and innovate to reduce errors, but also create errors to test the system.

With biometrics now part of technology's sphere, the definitions of ethics and logic keep evolving. The fingerprint is undoubtedly the oldest biometric used for authentication. Access to mobile phones, buildings, facilities, information and databases all share one common feature: gathering verification data in the shape of fingerprints. This authentication measure has been around for a long time. Even after the launch of face recognition, voice authentication and the like, fingerprint verification has kept its perks: it is easier, more widely available and cost-effective. This is why it has become the go-to choice for many stakeholders. However, as discussed above, fingerprint verification is not always optimal.

Scientists develop fake prints to cheat security systems

A research group in New York has developed a technique, named DeepMasterPrints, that can dupe a scanner using fake synthetic copies of real fingerprints. By analysing thousands of real fingerprints, the system was able to create an original print whose edges and angles were identical to those of the real ones. To a computer, or to the human eye, they look the same as an actual human print. Many scanners do not work precisely enough to detect a fake; they simply collect a print and match it against the ones already stored in the system. This means that if a scanner collects a fake print but can partially match it against an already stored print image, it is likely to approve the request. The eventual result is a massive breach that not only puts confidential data and facilities at risk but also affects individual privacy.

Phone scanners and wall-mounted scanners are not wrap-around. They have a flat surface, which means that when a finger is presented they capture only a partial image of the print. The scanners and authentication devices do not blend partial images into a single "master" image, so there is no fully accurate reference in the system to act as a security barrier. Adding to the problem, the sensors capture a distant image of the finger rather than a close-up one.
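The weakness described above can be sketched in a few lines. Everything in this toy example is an assumption for illustration — the minutiae sets, the Jaccard similarity and the 0.6 threshold are made up, not the behaviour of any real sensor — but it shows how accepting a match against *any* stored partial view, rather than a fused master template, widens the attack surface.

```python
# Toy sketch of partial-print matching. The minutiae sets, the Jaccard
# similarity and the 0.6 threshold are illustrative assumptions only.

def jaccard(a, b):
    """Overlap between two sets of minutiae identifiers."""
    return len(a & b) / len(a | b)

def accepts(candidate, enrolled_partials, threshold=0.6):
    # The device never fuses partials into one "master" template;
    # matching ANY stored partial view is enough to unlock.
    return any(jaccard(candidate, p) >= threshold for p in enrolled_partials)

# Enrolment captured three partial views of the same finger.
enrolled = [{1, 2, 3, 4, 5}, {4, 5, 6, 7, 8}, {8, 9, 10, 11}]

# A synthetic print only needs to resemble ONE of those views.
fake = {4, 5, 6, 7, 99}
print(accepts(fake, enrolled))  # → True: 4 of 6 features overlap the second view
```

A matcher that first fused the partial views into one template, or demanded agreement with several views at once, would reject the same fake.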


The researchers at NYU developed their system by training neural networks on the mesh of ridges that composes a human fingerprint. With this they generated a set of "possibilities" in the shape of a large number of fingerprint copies. When tested against a scanner, these were able to cheat it 77 percent of the time. The tests were run against VeriFinger, a fingerprint-matching system commonly used at enterprise and government level. This is similar to how hackers cheat a password-protected system by running a loop of common passwords against it, a trick with an overwhelming success rate of 68%. Hence the use of two-factor authentication by almost all password-protected devices in case of suspicious activity.
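The password analogy can be made concrete with a short simulation. Everything here is assumed for illustration: `match_score` is a crude stand-in for a real matcher such as VeriFinger, and the templates are random feature sets. But the loop structure — one small dictionary of synthetic prints tried against every enrolled account — is the essence of the attack.

```python
# Dictionary-attack sketch: a handful of synthetic "master" prints run
# against every enrolled template, like common passwords against logins.
# The scoring function and the 0.1 threshold are made-up assumptions.
import random

random.seed(0)

def match_score(print_a, print_b):
    # Stand-in for a real minutiae matcher: fraction of shared features.
    a, b = set(print_a), set(print_b)
    return len(a & b) / len(a | b)

# Fifty enrolled users, each with a 12-feature template.
enrolled_users = {f"user{i}": random.sample(range(100), 12) for i in range(50)}

# A small dictionary of synthetic prints drawn from a narrow feature
# range, mimicking attributes shared across many fingers.
master_prints = [random.sample(range(30), 12) for _ in range(10)]

broken = {user for user, template in enrolled_users.items()
          if any(match_score(mp, template) >= 0.1 for mp in master_prints)}
print(f"{len(broken)} of {len(enrolled_users)} accounts accepted a dictionary print")
```

As with passwords, the attacker does not need a high per-account hit rate; a few tries against many accounts is enough to break some of them.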

An open vulnerability like this can lead to potential hazards for all stakeholders involved. If a hacker can cheat their way into a system via one route, it is very likely that they can trick many other access points as well. This poses the question of whether a security protocol at enterprise level is needed, or whether individual security points should be introduced into the system.

The research group used deep learning methods to train against the specific algorithms involved in fingerprint matching. The technology is smart enough to create hundreds of replicas of a single image across a variety of data points. These data points are governed by the "standard" angles present in a human fingerprint. Because these angles are attributes shared across many fingers, they increase the likelihood of a fake print succeeding. Such a print is typically termed a "master key" that is able to break the system.
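A rough sketch of that search process, under heavy simplification: in the actual work a neural network generates fingerprint images and an evolutionary search tunes its input, whereas here a hypothetical matcher simply rewards "shared" attributes, and a hill-climbing loop keeps any mutation that does not lower the score. All names, sizes and thresholds below are assumptions, not the researchers' method.

```python
# Hill-climbing toy: mutate a candidate print, keep mutations that do not
# lower a (hypothetical) matcher's score. POPULAR_FEATURES stands in for
# the attributes shared across many real fingerprints.
import random

random.seed(1)

POPULAR_FEATURES = set(range(20))  # assumed to be common to many fingers

def matcher_score(candidate):
    # Hypothetical black-box matcher: rewards prints whose features
    # fall in the widely shared range.
    return sum(f in POPULAR_FEATURES for f in candidate) / len(candidate)

def evolve(generations=300):
    best = [random.randrange(100) for _ in range(12)]
    start = best_score = matcher_score(best)
    for _ in range(generations):
        mutant = list(best)
        mutant[random.randrange(12)] = random.randrange(100)  # mutate one feature
        score = matcher_score(mutant)
        if score >= best_score:  # keep any mutation that is no worse
            best, best_score = mutant, score
    return best, start

master, start_score = evolve()
print(f"matcher score: {start_score:.2f} at start, "
      f"{matcher_score(master):.2f} after evolution")
```

The real objective is harder — a deployed matcher only reports accept or reject at a fixed threshold — but the keep-if-no-worse loop captures the principle of steering a generator toward a "master key" print.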

What is the solution?

The purpose of the research is to prompt stakeholders and governments to increase the security of biometrics and to introduce instructions or standards that can achieve this. A lot of high-level data is protected by fingerprints: registering with a government agency, banking, education and healthcare are all sensitive areas where the practice of fingerprint authentication is relatively widespread. NYU and MSU researchers published an in-depth research paper explaining this phenomenon. The paper won recognition at a national-level conference on biometrics and security supported by the United States National Science Foundation.

A successful attempt to trick real systems and authentication devices makes it clear that security may be at risk.

Are there alternatives to the devices currently in use, or will a completely new range of devices be offered or developed? What is the solution? Is improving fingerprint authentication the answer, or is an alternative approach to biometric authentication the way forward?

These are some of the important questions which we must deal with in order to define the future of Fingerprint Biometrics.
