Face Value: The Ethics of Facial Recognition Biometrics

The average person has a whopping 100 passwords across their various online accounts, a study by NordPass reveals. Keeping all these accounts secure requires passwords that are long, complicated and hard to guess, which also makes them nearly impossible to remember. As a result, we are more inclined to write passwords down or reuse the same one across several accounts, neither of which is a very secure option.

The need for quick and easy experiences in the digital world means consumers have little time or patience for forgotten passwords, or for creating long and complicated combinations of letters, symbols and numbers. In addition, organisations such as banks are scaling back on brick-and-mortar premises and increasingly rely on digital onboarding, making fast and secure online identity verification solutions essential.

Hence the rise of biometrics and liveness detection, where your unique body is your ultimate password. Gur Geva, Co-Founder and CEO of iiDENTIFii, a provider of face-based biometric identity verification services, says, “Liveness is the guarantee that the individual attempting to authenticate is a real person, not a mask, a bot, or a deepfake. This is critical especially for financial institutions where security and compliance are paramount.”

Despite widespread adoption, there has been some negative press about facial biometrics and whether it encroaches on the individual’s right to privacy. In fact, several US cities recently banned the use of facial biometrics by police and local agencies. “This has stemmed from a failed plan by the IRS in the USA to force Americans to scan their faces to file their tax returns. There was some concern around the fact that a third party was collecting the data, raising questions around the ethics of this practice,” says Geva. Concerns have also been raised about whether facial recognition technology can accurately identify people across different racial and ethnic groups.

While privacy and accuracy concerns are major considerations, Geva warns against painting all facial recognition technology with the same brush. “Consent is key. iiDENTIFii’s technology is opt-in only, and the data it collects is not used to track people in the streets,” assures Geva. In addition, iiDENTIFii’s solution is specifically designed and calibrated for the country where it is used, meaning its algorithms do not suffer the same shortcomings for which overseas technology has been criticised. “Deploying European technology in Africa is destined to be discriminatory, but by using the right technology and having the right intentions in place, it is possible to create the most secure authentication ever used to date,” he notes.

The reality is that machines are better at identifying people than humans will ever be. The Department of Homeland Security in the US recently demonstrated that facial recognition algorithms can now correctly identify individuals up to 96 percent of the time. “If that can reduce the likelihood of a wrongful arrest, or of someone gaining access to someone else’s accounts, the technology needs to be given some credit for the good that it achieves,” maintains Geva.

Ultimately, customers need to feel comfortable using the technology and be confident that it is trustworthy while enhancing their safety and security. “Fortunately, people’s favourable disposition towards taking selfies is helping to drive rapid adoption of this technology, and iiDENTIFii’s proven credibility, as well as its opt-in feature, eases possible misgivings about using it. These factors, combined with South Africa’s sophisticated digital onboarding processes and forward-thinking POPI Act, mean local businesses are fast becoming leaders in a growing global digital market.”