How AI is making humans more (not less) valuable

By Lambert Hogenhout

At times it may seem that AI and robots are poised to make us humans obsolete. But the opposite is happening: every advance in technology makes humans more relevant. To be precise, the value is in our data and the content we produce. 

We have always been offered small rewards for sharing our data. In the 1980s, supermarkets gave out customer loyalty cards that offered discounts on some products in return for insight into (aggregate) trends in our shopping behavior. In the 1990s we got free email accounts, whose providers may (supposedly) not disclose our personal emails to anyone, but can use the text they contain to serve us better ads, or as part of bulk training data for AI systems.

A free social network account in return for sharing everything about our lives and our relationships with others. A free copy of a fact sheet or a report from a consulting company, in return for our email address, name, title, and company.

Sometimes the data requested is small, seemingly insignificant: an online store may offer you a 5% discount on the purchase you are about to make if you enter your email address to sign up for its mailing list. But clearly, your data is worth at least that much to these companies.

(More than a) penny for your thoughts

Recent advances in artificial intelligence are taking things to a new level of analysis – and of value extracted from your data. Headsets and other devices used for VR/AR (or “spatial computing” if you prefer) open opportunities to collect far more intimate data. The inward-facing cameras in some of these devices can analyze our facial expressions and track our eye movements (how many milliseconds did your eyes pause on a particular ad in a virtual world?). These devices may know more about our behavior and desires than we (consciously) do.

With these increasingly advanced techniques to collect and analyze our data, and to turn it into profits, it is clear that our data has become more valuable over time. This is true both for collective human data (the vast amounts of content we produce in articles, blogs and vlogs, Wikipedia, etc.) and for individual or personal data.

Logically, one would expect this rise in value to be reflected in bigger rewards for sharing our data. However, the rewards have not gone up: we are still getting the same email account, social media account, and other small perks. Meanwhile, the services we receive as a reward often cost less to run, as compute power and storage keep getting cheaper. In fact, the price of some services that collect a lot of our valuable data, such as movie streaming services, is rising rapidly. So much for bigger rewards.

Unfair deal

We must conclude that we are getting an increasingly bad deal. How bad? I don't know. There are people who scrutinize credit card offers or study the frequent flyer programs of different airlines to find out who offers the best deal. I wish someone would do the same for the free services we get in return for an (often not very clearly disclosed) amount of our personal data. Which ones are fair deals, and which are the worst?

As our data becomes more valuable, it is also more at risk. There are more ways to leverage our data, and more incentive to do so. Whereas in the past the implications of sharing our data were perhaps more foreseeable (spam email, or telemarketing calls), these days it is far less clear what those implications are. As individuals, we need to become much more aware of the risks to our data privacy. I would argue that classes on data privacy should be part of every high school curriculum.

Regulating the market

Regulation can help. Since the adoption of the GDPR, new data privacy laws have been springing up in many countries. The weakness is that many of these laws center on consent, and consent is often easily obtained through a multipage “Terms and Conditions” that hardly anyone reads before clicking the “Agree” button. Another weakness of general consent is that it may not take into account the context in which data may later be used (see Prof. Helen Nissenbaum’s work on contextual integrity).

The processing of such data by AI systems is also beginning to be regulated: the recently adopted AI Act in the EU stipulates requirements for the processing of special categories of data. It is not always easy to control in what ways AI systems leverage certain data, but the classification of systems into risk categories is at least a helpful step.

At the same time, companies would do well to become more conscious of the effect of data exchanges on their relationships with their customers. The days when a company could “trick” customers into sharing their personal data and assume no one would care or take notice are probably over. Citizens are increasingly concerned about their online data privacy. If a purchase in an online store leads to targeted marketing emails or advertisements, it may result in negative reviews and resentment. Of course, companies need certain data to operate and to remain competitive, but they should realize that they must maintain a careful balance. Being upfront and transparent with customers about the use of personal data is a good start.

There are also promising technological solutions. For example, Solid is an architecture that gives us control over our personal data. It involves “pods” that contain our data, are controlled by us, and can be hosted by a provider of our choice. Many large tech companies are already participating in the discussion, and strong demand from the public could convince more companies to adopt this standard.
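To make that concrete, here is a minimal sketch (in TypeScript) of what reading data from a Solid pod can look like, using the open-source @inrupt/solid-client library. The WebID URL below is a hypothetical placeholder; the point is that the profile document lives at an address the user controls, not inside a platform's database.

```typescript
// Minimal sketch: read a user's public profile from their Solid pod.
// Assumes the @inrupt/solid-client and @inrupt/vocab-common-rdf packages
// are installed; the WebID used below is a hypothetical placeholder.
import { getSolidDataset, getThing, getStringNoLocale } from "@inrupt/solid-client";
import { FOAF } from "@inrupt/vocab-common-rdf";

async function readProfileName(webId: string): Promise<string | null> {
  // Fetch the RDF document that the WebID points to. In Solid, this
  // document lives in the user's own pod, hosted wherever they choose.
  const dataset = await getSolidDataset(webId);

  // Extract the "thing" (set of statements) that describes the user.
  const profile = getThing(dataset, webId);
  if (profile === null) return null;

  // Read the foaf:name property; present only if the user chose to share it.
  return getStringNoLocale(profile, FOAF.name);
}

// Hypothetical WebID; the user decides who hosts this pod.
readProfileName("https://alice.solidcommunity.net/profile/card#me")
  .then((name) => console.log("Profile name:", name));
```

The key design choice is that access is granted and revoked by the pod's owner: a company that wants more than the public profile has to request permission, rather than quietly accumulating its own copies of our data.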

But it begins with people realizing that advances in technology make them increasingly valuable as individuals, and that they should bargain much harder with their data.
