My wife and I recently removed all images of our children from Instagram. Like most people, I don’t trust Facebook, Instagram’s parent company, very much these days.
This fact isn’t so remarkable in and of itself, but it raises the question of why. Of course, there’s the oft-cited Cambridge Analytica scandal, but throughout history, brands have weathered scandals that touched their users and emerged relatively unscathed.
So then why do I, like so many other people, have a deeply ingrained trust deficit with Facebook and, more broadly, big tech? And is the cause of this something more serious that other brands should be observing and planning for?
This isn’t a product issue per se. On the surface, Facebook is a great consumer product. It offers a host of services, largely free, that connect us with our nearest and dearest, keeping us in contact in a way that would have been unimaginable before it existed. Sounds great, right? Yet people don’t like Facebook. Indeed, the company has a serious trust issue. A 2018 Trust Index of U.S. adults by Jebbit found that Facebook had the lowest consumer trust score (3.1) of any surveyed brand. How a company that offers such a great, valuable product could come to be disliked and distrusted so strongly speaks to the changing nature of trust in the data-driven internet era.
There are two issues at play here. The first is that consumers do not understand just how much data is being collected about them, or how deeply it is mined to synthesize incredibly personal insights. The lesson Cambridge Analytica should have taught us is not simply that elections can be manipulated, but that we can be susceptible to deep suggestion while remaining unaware that it’s happening. This is covert mass manipulation.
The second is a lack of understanding as to how this data may be used in years to come. The information we expose about ourselves or our children may not seem sensitive today, but allowing any company to accumulate a pattern of your child’s behavior or facial characteristics from birth to early adulthood creates a treasure trove of data that, in decades to come, will be mined, analyzed and exploited in ways even engineers have not yet considered. This is the risk: you’re placing your data (and your faith) in a future state of technology, driven by process automation, machine learning and artificial intelligence, that no one yet quite has a grasp on.
Here’s a thought experiment, none of which is beyond current technology. Suppose you have a public Instagram feed with photos of your children posted over several years. As a young adult, your child applies for health insurance. In this future, the systems in the insurer’s actuarial armory have already scraped the photos from their childhood, noted an excessive amount of time spent in bright sunlight and, using skin-pattern scanning, flagged blemishes that may be early indicators of skin cancer. The application is denied automatically, without even a human review.
The scenario I’ve described above sounds frightening and sci-fi-like, but many of the technologies it relies on exist today, with varying degrees of accuracy. Our images are regularly scraped, indexed and searched by automated systems, and various algorithms can be run against them. And that is just the data you can see, to say nothing of the vast quantities of data you create perhaps without realizing it: behavioral traits, interests and physical location, all of which can be used to triangulate a detailed understanding of your personality, habits, disposition and socioeconomic status.
So consider, as an individual, a parent or a company: how are you managing the data you create?