The key problem with biometric parameters is their very nature: living tissue changes. To keep scanners from rejecting legitimate users over a minor defect or scratch on the skin, recognition algorithms build in a margin of error. Roughly speaking, the system will agree to identify a person even when the scanned image is not a 100% match to the stored reference.
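The idea of a matching tolerance can be sketched in a few lines. This is a toy illustration, not any real scanner's algorithm: assume the matcher reduces a comparison to a single similarity score between 0 and 1, and accepts anything within a configurable tolerance of a perfect match.

```python
def matches(similarity: float, tolerance: float) -> bool:
    """Accept the probe print if its similarity score is within
    `tolerance` of a perfect (1.0) match. Toy model of the margin
    of error described above, not a real matcher."""
    return similarity >= 1.0 - tolerance

# A scratch drops the score to 0.995; a 1% tolerance still accepts it.
print(matches(0.995, 0.01))   # True
# A stricter scanner with a 0.1% tolerance rejects the same finger.
print(matches(0.995, 0.001))  # False
```

The same tolerance that keeps the scanner usable for a scratched finger is exactly what an attacker's "close enough" fake print slips through.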
Each system has its own algorithms, tolerances, and conditions, but some tolerance always exists. A group of researchers exploited exactly this principle to create a tool called DeepMasterPrints: a neural network trained to generate artificial fingerprints that partially resemble thousands of real ones. To train it, the network was fed a large set of genuine fingerprints so it could learn to construct averaged, "master" prints.
It is impossible to create a single fake print that works everywhere, but DeepMasterPrints lets an attacker pick the right one for a particular lock. Everything depends on the device's false-match rate, i.e. that very permissible error. For example, if a scanner accepts one false match in a hundred (a 1% false-match rate), the neural network cracks it with a 77% probability. For a stricter system with a 0.1% rate, the figure drops to 22%: not much, but enough to make the risk worthwhile for an attacker.
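The core trick of searching for a print that fools as many enrolled users as possible can be illustrated with a deliberately simplified sketch. Everything here is a toy assumption: a random linear map stands in for the generator network, cosine similarity stands in for the fingerprint matcher, and simple hill-climbing over the latent vector stands in for the evolutionary search the real system performs.

```python
import random

random.seed(0)
DIM, LATENT, GALLERY_SIZE = 16, 4, 200

# Toy "generator": a fixed random linear map from latent space to print
# space (a stand-in for a trained generative network).
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(LATENT)]

def generate(z):
    return [sum(z[i] * W[i][j] for i in range(LATENT)) for j in range(DIM)]

def similarity(a, b):
    # Cosine similarity as a toy matching score in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

# A gallery of random "enrolled prints" the attacker wants to match.
gallery = [[random.gauss(0, 1) for _ in range(DIM)]
           for _ in range(GALLERY_SIZE)]

def fraction_matched(z, threshold):
    """Share of the gallery that accepts the generated probe."""
    probe = generate(z)
    return sum(similarity(probe, g) >= threshold for g in gallery) / GALLERY_SIZE

def evolve(threshold, steps=300):
    """Hill-climb the latent vector to fool as many prints as possible."""
    z = [random.gauss(0, 1) for _ in range(LATENT)]
    best = fraction_matched(z, threshold)
    for _ in range(steps):
        candidate = [x + random.gauss(0, 0.2) for x in z]
        score = fraction_matched(candidate, threshold)
        if score >= best:
            z, best = candidate, score
    return best

lenient = evolve(threshold=0.3)  # loose matcher, like a 1% false-match rate
strict = evolve(threshold=0.6)   # strict matcher, like a 0.1% rate
print(f"lenient matcher fooled: {lenient:.0%}")
print(f"strict matcher fooled:  {strict:.0%}")
```

Even in this crude model the pattern from the article shows up: the looser the matcher's tolerance, the larger the fraction of enrolled "prints" a single optimized fake can satisfy.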
Scans of real fingerprints (left) and prints generated by DeepMasterPrints (right)
Moreover, even the most paranoid algorithms, which allow a false-match rate of just 0.01%, can be fooled. Yes, DeepMasterPrints gives only a 1% chance of success against them, but that is two orders of magnitude above what the security system itself considers acceptable. This development is unlikely to put an end to fingerprints as a means of authorization in digital systems, but it is an excellent reason to think about improving the verification and self-checking mechanisms of such devices.
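The "two orders of magnitude" claim is simple arithmetic worth making explicit: the attack succeeds 100 times more often than the device's own design target for accepting a wrong finger.

```python
import math

attack_success = 0.01       # 1% chance the generated print is accepted
false_match_rate = 0.0001   # 0.01% rate the scanner is designed to allow

ratio = attack_success / false_match_rate
print(round(ratio))                    # 100
print(round(math.log10(ratio)))        # 2, i.e. two orders of magnitude
```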