In the case of Google Photos now, and probably Facebook in the future, facial recognition will also harvest data from social engagement. For example, if I post that my son Kenny is dressed up for Halloween, the system can use that information not only to identify him with a mask on, but also to find him in every other picture taken of him at the same event, including pictures that were never posted at all, just automatically uploaded via the Google Photos automated backup feature.
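The mechanism behind that kind of matching is worth understanding. A minimal sketch, in plain Python with invented numbers: modern face-recognition systems reduce each detected face to an embedding (a numeric fingerprint), then group photos whose embeddings are close. The filenames, vectors, and threshold below are all hypothetical illustrations, not any vendor's actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_matches(tagged_embedding, photo_library, threshold=0.9):
    """Return photos whose face embedding is close to a tagged one,
    including photos that were never posted, only auto-backed-up."""
    return [photo for photo, emb in photo_library
            if cosine_similarity(tagged_embedding, emb) >= threshold]

# One posted, labeled photo ("Kenny in a Halloween mask") ...
kenny = [0.9, 0.1, 0.3]

# ... and a backup library of unposted photos with extracted embeddings.
library = [
    ("party_001.jpg", [0.88, 0.12, 0.31]),  # same face, same mask
    ("party_002.jpg", [0.91, 0.09, 0.29]),  # same face again
    ("stranger.jpg",  [0.10, 0.95, 0.20]),  # someone else entirely
]

print(find_matches(kenny, library))  # → ['party_001.jpg', 'party_002.jpg']
```

Once one photo is labeled, every sufficiently similar embedding in the library inherits that identity, which is why a single tagged post can unlock an entire unposted album.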
So now what?
There are three important things to note about all this. The first is that research and development on these artificial-intelligence photo recognition technologies will continue and the systems will become far more advanced. It's important for the educated public to grasp the reality of what's possible now, and what will be possible in the future.
In a nutshell, it's only a matter of time before social networks, law enforcement agencies and other organizations are able to instantly identify any of us with extremely high probability from any photo, including those taken with webcams, security cameras at ATMs and elsewhere, cameras mounted at toll gates, traffic cams and more. Facial-recognition technology is available on more than 28 million mobile devices, a figure expected to soar to nearly 123 million by 2024.
The second important thing to remember is that the emergence of this technology does not make its abusive implementation inevitable. There seems to be an assumption that our privacy will be routinely violated in the future. But that's not necessarily true.
The development of technology that can identify everyone all the time is inevitable. But as we've seen with both Facebook and Google, that technology doesn't have to be used to violate our privacy. Facebook is so concerned about the public's reaction that it's not even using Facebook photos to test its latest recognition technology. Google is so concerned about our reaction that it's not associating faces with identities. Clearly, they're both keenly aware of both public concerns and the potential unintended consequences of using this technology to its full potential -- at least for now.
Apple made it clear at its Worldwide Developers Conference that it's possible to offer personalization without privacy violation. The company's new Proactive feature for Siri harvests data from email, calendar and more, but the data never leaves the phone and is never associated with a person's ID. It's not uploaded to the cloud or entered into some permanent database. Apple itself never has access to it.
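The architecture the article describes can be sketched in a few lines. This is a hypothetical illustration of the on-device pattern, not Apple's actual implementation: personal data is ingested into an index that lives only in local memory, suggestions are computed against that index, and there is simply no code path that transmits the data or ties it to a cloud account ID.

```python
class OnDeviceAssistant:
    """Hypothetical sketch: personalization computed entirely on-device.
    All names and fields here are invented for illustration."""

    def __init__(self):
        self._local_index = {}  # lives only on the device, never uploaded

    def ingest(self, source, items):
        """Harvest data from a local source, e.g. 'email' or 'calendar'."""
        self._local_index.setdefault(source, []).extend(items)

    def suggest(self, query):
        """Match against the local index only; nothing is associated
        with a user identity or sent over the network."""
        return [item
                for items in self._local_index.values()
                for item in items
                if query.lower() in item.lower()]

assistant = OnDeviceAssistant()
assistant.ingest("email", ["Flight BA117 departs Tuesday 9am"])
assistant.ingest("calendar", ["Lunch with Sam", "Flight check-in reminder"])
print(assistant.suggest("flight"))
```

The design point is that privacy here is structural, not a policy promise: because the class holds no network client and no account identifier, the feature works while the sensitive data has nowhere to leak to.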
There is no inevitability that personalization or recognition technology must thoroughly violate our privacy. In fact, many of the current privacy violations that happen through our smartphones and computers could be rolled back. The first step in making that happen is for the public to get more sophisticated about the link between what's possible in terms of features and benefits on the one hand and what's necessary in terms of privacy violations on the other.