How does facial recognition work?
Last week, Forbes quoted me in their article 12 Things You Need To Know About Facial Recognition Technology. I am a member of the Forbes Technology Council, and they regularly check in with us to get our take on what's happening in the technology space.
Racial Bias & Privacy Concerns
A number of the items listed are hot topics: racial bias and privacy concerns. Back in May, the City of San Francisco banned the use of facial recognition technology by government agencies, and in Portland, Oregon, the City Council voted to ban the use of facial recognition software by all parties, public or private. Whether these bans will hold up in court remains to be determined, but even the framing of these tools as "facial recognition" reveals how little the general population understands about them.
My contribution to the Forbes article came in at number three and was as follows:
“The tools being used for facial recognition are much more versatile than people think. Many can also be used for object recognition. This means these algorithms can be trained to recognize and categorize things like cars, trees, purses, colors—really anything you can think of—and do so at a tremendous scale.”
While all of this is true, after the article was published I realized I hadn't pushed the point far enough. These tools are not built for object recognition. They are built for anything recognition. But what do I mean by anything recognition? Objects are already a lot, right?
In his playful article AWS Rekognition, Comprehend, and Transcribe, Metal Toad CTO Tony Rost explores the power of three cloud-based machine learning tools and what they could discern through multiple viewings of Star Wars: The Force Awakens.
After a few comical misfires, where the algorithm tagged Chewbacca as a beard, the machine got it right and recognized everyone, no matter how each person appeared in the scene. With a little more time, Tony could have taught it to recognize the Millennium Falcon, to tell when the characters were in outer space, or really anything else.
Out of the box, the system tagged the characters and did a pretty good job of tagging sentiment. Were the characters scared? Happy? Sad? The machine learned to be pretty accurate.
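The kind of out-of-the-box tagging described above — labels for what's in a frame, emotions for each face — comes back from Rekognition as plain JSON. Here is a minimal sketch of filtering that output; the response dicts are hand-made samples in the documented response shape (a real call would go through `boto3.client("rekognition")`, which needs AWS credentials), and the label names and confidence values are fabricated for illustration.

```python
# Sketch of parsing Rekognition-style responses for labels and sentiment.
# The sample dicts below mimic the documented shapes of detect_labels
# ("Labels") and detect_faces ("FaceDetails" -> "Emotions"); the values
# themselves are made up for this example.

def top_labels(response, min_confidence=80.0):
    """Return label names reported above a confidence threshold."""
    return [label["Name"]
            for label in response.get("Labels", [])
            if label["Confidence"] >= min_confidence]

def dominant_emotion(face_detail):
    """Pick the highest-confidence emotion for one detected face."""
    emotions = face_detail.get("Emotions", [])
    if not emotions:
        return None
    return max(emotions, key=lambda e: e["Confidence"])["Type"]

# Hand-made sample responses (illustration only, not real model output).
labels_response = {"Labels": [
    {"Name": "Person", "Confidence": 99.2},
    {"Name": "Beard", "Confidence": 91.5},
    {"Name": "Spaceship", "Confidence": 42.0},  # below threshold, dropped
]}
face_detail = {"Emotions": [
    {"Type": "CALM", "Confidence": 12.3},
    {"Type": "FEAR", "Confidence": 81.7},
]}

print(top_labels(labels_response))    # ['Person', 'Beard']
print(dominant_emotion(face_detail))  # FEAR
```

The point is that "recognition" is just thresholded classification over whatever categories the model was trained on — swap the training set and the same pipeline tags cars, safety vests, or starships.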
For better or for worse, machines are poised for anything recognition. Not just cars, trees, purses, or colors—with a sufficient data set, machines could identify:
- Audience engagement
- Safety violations
- Lost children
- The total cost of an outfit being worn
I imagine a lot of you may be feeling very uncomfortable now, and for good reason. Despite our trepidation, and whatever we may do to prohibit their use in our local jurisdictions (city, county, state, country), these tools will be applied to all kinds of things we can't imagine today.
China or Denmark
Despite what we may think, the United States is sort of middle-of-the-road regarding legislation around facial recognition (and by extension, legislation of anything recognition). Attempting to ban the use of technology is generally both ineffective and foolish. San Francisco and Portland's reactionary bans fall into this category. As an alternative, I'd like to put forth two archetypes: China or Denmark.
I'll start with China because it's easy to define: there are no limits. The government can do anything and everything it wants, and the people have no recourse.
By comparison, Denmark (a massive advocate for GDPR) has not banned collecting data, but it has effectively banned the storage of any personal data — email, names, even photos where people are in the background with their backs to the camera — for any real length of time. While you might think this is the same as banning the use of facial recognition software, it's not. Data can be captured, safely anonymized, and then stored. As long as you can't identify people, it's all good. We are doing some work with Claviate, a Danish company founded by Kasper Kratmann that specializes in this kind of anonymization and storage of data. Under this paradigm, you can determine:
- Factory productivity, but not single out individual workers
- Site safety, but not hand out individual infractions
This means accountability rests with the organization, not the individual.
I believe this is the right kind of application for anything recognition, and indeed, it can be more effective at optimizing the organization as a whole.
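The anonymize-then-aggregate pattern described above can be sketched in a few lines: strip anything that identifies a person before storage, then report metrics only at the organization level. The event records, field names, and sites here are hypothetical, not Claviate's actual schema.

```python
# Sketch of the Danish-style paradigm: keep organization-level metrics,
# discard anything that could target an individual. All field names and
# events are hypothetical, for illustration only.
from collections import Counter

PERSONAL_FIELDS = {"worker_id", "name", "face_crop"}  # hypothetical identifiers

def anonymize(event):
    """Drop person-identifying fields so stored data can't target anyone."""
    return {k: v for k, v in event.items() if k not in PERSONAL_FIELDS}

def site_violation_counts(events):
    """Aggregate safety violations per site from anonymized events."""
    return Counter(e["site"] for e in events if e.get("violation"))

# Hypothetical raw detections; identifiers never reach storage.
raw = [
    {"site": "plant-A", "violation": True,  "worker_id": "w-17", "face_crop": b"..."},
    {"site": "plant-A", "violation": True,  "worker_id": "w-03", "face_crop": b"..."},
    {"site": "plant-B", "violation": False, "worker_id": "w-22", "face_crop": b"..."},
]
stored = [anonymize(e) for e in raw]
assert all(PERSONAL_FIELDS.isdisjoint(e) for e in stored)

print(site_violation_counts(stored))  # Counter({'plant-A': 2})
```

The design choice is that accountability attaches to `plant-A`, not to worker `w-17` — the organization learns it has a safety problem without anyone being able to hand out an individual infraction.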
Good vs. Bad
No matter what we would like to happen, the steady march of new algorithms cannot be stopped. Machines will get more access to data, and the insights they glean will change how our world works. Humans are ill-equipped to manage this individually, so we must choose smart legislation that fosters innovation while protecting individual rights. So far, the United States isn't doing very well — but there's still time to work things out.