Facial recognition, popular in any number of security applications, is showing up on more and more mobile wireless devices, particularly smartphones. It is a convenient and fairly effective security scheme across any number of ecosystems.
For wireless, the implications are huge because facial recognition makes security on a smartphone very simple. Just look into the device – no passwords, no logins, just a smile. It removes one of the more nagging issues with consumer smartphone security – getting the end user to actually use it.
Facial recognition, along with other emerging biometric technologies, has the potential to make our lives much more convenient, eliminating many of the existing cumbersome, mechanical security processes. We are just on the cusp of the next generation of exciting biometric advances. These include blood vessel mapping, voice scans, facial thermography, DNA matching, odor sensing, blood pulse measurements, skin pattern recognition, nailbed identification, gait recognition, even ear shape recognition. Beyond that are concepts such as using brain and heart patterns as identifiers, because they are as unique to each individual as fingerprints.
I like to use the phrase that with great technology comes great responsibility. However, for some time now those with great power have not shown the accompanying great responsibility, particularly in social media platforms (Facebook anyone?) and online retailers.
They are not alone. There is abuse of power beyond Silicon Valley – by both the bad actors and those claiming to use it to protect the innocent. Apparently, both the good guys and the bad guys are forgetting the message.
On the home front it seems, ever since the last election, all types of governmental agencies, from federal down to municipal, think they have free rein to do whatever they deem necessary in the name of “security” – from questionable practices to outright illegal acts. It even extends to private security companies.
For example, on the private side, a security firm was hired by a homeowners association to install special cameras that record the license plates of all vehicles entering and leaving the property. The reasoning was that if a crime were committed, this type of database would help in apprehending the perpetrators. While there is some truth to that, it smacks of Big Brother-ism. The majority of unit owners were up in arms over being tracked every time they (and those they invite or know) come and go. There is currently a battle between unit owners and the HOA over privacy issues and this practice. It will likely end up in court.
On the authoritarian side, one of the latest examples of abuse is ICE and the FBI using facial recognition and scanning all types of IDs without any regard for privacy laws. Apparently, AI has become sophisticated enough to produce very accurate matches from marginal images.
The Washington Post claims it obtained records dating back almost five years, which suggest the FBI and ICE have been using DMV databases to build a surveillance network without the consent of citizens. The FBI and ICE have amassed data on 641 million individuals, the vast majority of whom are innocent and were never informed or notified of such activity.
States, as well, are scanning DMV records and applying AI-based facial recognition analytics. The legal community argues that carte blanche scanning of DMV photos, for nothing more than building a database, essentially treats everyone as a suspect.
In fact, once Congress got wind of it, it summoned the FBI’s leaders to learn what they are doing, and why. Just before that, the House Committee on Oversight and Reform held hearings on the impact of facial recognition technology on civil rights and liberties. So this has caught Congress’ attention.
And privacy is not just a national concern – it is global. Our neighbor to the north, in what seemed like an innocent attempt to work towards goals such as smart cities, got hung up in the privacy game. The Canadian Civil Liberties Association (CCLA) got wind of a project concocted by Prime Minister Trudeau and Alphabet subsidiary Sidewalk Labs.
They were planning to outfit the Toronto area with smart city solutions, covering everything from energy use to transportation, by collecting and analyzing data. The CCLA is suing to shut the project down based on data privacy concerns. The basis of the CCLA suit is that Sidewalk Labs could not guarantee that affiliate parties accessing this so-called “urban data” would keep it private.
Privacy issues are not new. We were dealing with them long before the digital transformation. But the digital realm brought about a myriad of new ways to acquire data, and security has yet to address them. The situation is that organizations of all types essentially have a “wild west” – a largely unregulated playing field – in which to romp.
Therefore, the digital domain needs to develop a framework of accepted principles – rules of the road, one might say – a baseline of minimum standards.
At present, there are really no legislative or regulatory guidelines that oversee the mining, archiving, and dissemination of data acquired by complex surveillance or data mining systems. There is a federal statute that protects driver’s license information – the Driver’s Privacy Protection Act of 1994. However, law enforcement agencies are exempt from the restrictions in that law; imagine that!
Discussions around such legislation are on the congressional plate, however. While there has been some sniffing of the air (Congress calling leaders from social media platforms and other high tech giants to testify, for example) there has been mostly just saber-rattling when it comes to action.
This raises a more global issue – whether the general public is prepared to sacrifice privacy rights in the pursuit of national security. The argument can be made that, in this time and space, with so many bad actors, malfeasants, terrorists and the like, the end justifies the means and privacy is a victim of progress. Such arguments have some validity, but they are for a different discussion.
Facial recognition is a very intrusive technology from a personal standpoint. Authorities need to find a sweet spot where they are not just scanning for scanning’s sake but have “reasonable suspicion.” And if the need arises to scan innocents to find potential bad actors, a compromise might be that the authorities can scan to their heart’s content – however, negative results simply do not get archived. If the scan does not trigger any alarms, the data is immediately discarded.
Another boundary could be that the data holder must periodically purge these databases – for example, if no action has been taken on the images after a set period of time. Others are calling for bias and accuracy testing, court oversight, minimum photo quality standards, and public audits.
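To make the two safeguards above concrete, here is a minimal sketch of how a scan store might enforce them in software. All names (`ScanStore`, `ScanRecord`, the 90-day retention window) are hypothetical illustrations, not any agency's actual system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

RETENTION_PERIOD = timedelta(days=90)  # hypothetical purge window

@dataclass
class ScanRecord:
    subject_id: str
    captured_at: datetime
    matched_watchlist: bool  # did the scan trigger an alarm?

class ScanStore:
    """Toy store enforcing two rules: negative scans are never
    archived, and archived positives age out after a set period."""

    def __init__(self) -> None:
        self._records: list[ScanRecord] = []

    def ingest(self, record: ScanRecord) -> bool:
        # Rule 1: discard immediately if the scan raised no alarm.
        if not record.matched_watchlist:
            return False
        self._records.append(record)
        return True

    def purge_expired(self, now: datetime) -> int:
        # Rule 2: periodically drop records older than the window.
        before = len(self._records)
        self._records = [r for r in self._records
                         if now - r.captured_at <= RETENTION_PERIOD]
        return before - len(self._records)

    def count(self) -> int:
        return len(self._records)
```

The design point is that discard-on-negative happens at ingest time, before anything touches storage, while the retention purge is a separate periodic sweep – two independent controls an auditor could verify separately.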
And do not forget that while AI-assisted facial recognition has come a long way, it is not foolproof. That cannot be overlooked when applying the technology, and it must be taken into consideration as guidelines are developed for its use.
Of course, there are occasional examples of agencies seeing the error of their ways. In San Francisco, for example, the city’s Board of Supervisors has made it illegal for authorities to implement facial recognition technologies unless approval has been granted – kind of like needing permission for an old-fashioned wiretap. The police would have to demonstrate stringent justification, accountability systems, and safeguards for privacy rights.
Perhaps now is a good time to recall the old saying, “better a thousand guilty men go free than one innocent be imprisoned.” In an ideal world, there would be no identity mistakes. However, this is not a perfect world, and courts are full of cases of mistaken identity. While facial recognition and similar tools can significantly reduce the odds of such mistakes, they are not foolproof. And the issue of privacy stands alone.
While this missive may seem to stand against the use of facial recognition and other evolving biometric tools, that is really not the case. It intends to bring attention to some of the abuses and potential challenges in use cases, and to argue that supervision of some sort is needed to protect privacy.
Face recognition, and soon other biometric tools can help law enforcement be more accurate and efficient in its tasks. But currently, there is not a good enough understanding of the privacy implications and limitations, and the potential of facial recognition and other biometric platforms. We need to recognize that before we ride off into the biometric Wild West.