Over the last decade, brands around the world have embraced various technologies to make big improvements to their customer experience. Our mindset when it comes to technology has always been “What can we do?”, but perhaps we should instead be asking, “What should we do?”
I work on customer experience with some of the world’s top brands, and I am a big advocate of technology. But technology is increasingly moving into the realm of human rights, and it could have a huge impact on our personal wellbeing and the future of society. So I suggest we think about the following five ethical questions:
- Just how much power should individual companies be allowed?
History shows that technological revolutions create pockets of power, and striking the right balance between companies, consumers and governments is usually troublesome. Governments have been breaking up businesses they see as illegal monopolies for over a hundred years, because even back then, these companies had an outsized impact on elections, the economy and the environment.
Today, the tech giants’ control of so much data gives them enormous influence over the end goal of customer experience, so where do we draw the line on their power? If they can convince us which products to buy and which clothes to wear, should that mean they can also influence how we vote?
It is a question that gets harder to answer with every passing year, and laggard governments are struggling to keep pace with developments. So perhaps the more important question is: who draws the line? Will it be governments, or self-regulating systems within the tech companies themselves? Is it time for full algorithmic transparency, and will consumers demand power over their own data?
- Who do we allow to use our data?
When we use so many ‘free’ services today, we are really paying with our data. But consumers are starting to reassess whether this is a good trade-off.
Technologies such as facial recognition could help create incredible customer experiences, but would you want a medical insurer to charge you a higher premium because the technology identified signs of high blood pressure? Or what happens when political organisations start using the technology to monitor and manipulate our behaviour?
On a more extreme level, what happens when effective brain-computer interfaces emerge, such as Elon Musk’s Neuralink project? Reports suggest these technologies could help individuals walk or see again, but they could also make us more aggressive. And who will actually own the data that comes from our brains?
- Should tech control what we feel?
I love technology. It has created a level of convenience that has given me, and millions of other people, more time to spend with friends and family. However, as technology advances, it is also getting better at influencing complex things like our emotions.
Social media, for instance, is purposefully designed to be addictive, and the links between social media and anxiety, depression, self-harm and even suicide are well documented. Amazon recently unveiled Halo, a competitor to the Apple Watch and Fitbit, which goes beyond tracking health to analyse the user’s voice and present a picture of how they feel.
Now, these developments could be used for good, but is it ethical to exploit consumers’ emotions? According to Gartner, AI identification of emotions will influence more than half of the online advertisements you see by 2024.
- Do algorithms make ‘good’ decisions?
We have already reached a point where we trust algorithms to make decisions on our behalf. But should we trust them to make the right decisions?
The emergence of automated buying will be an interesting development in our trust for algorithms. A smart fridge might decide we need more milk, and that is great, but what about wine? If it senses the white wine bottles are emptying faster, should it anticipate that and buy more of them? Or maybe it should help us by ordering less?
What is “good” for us in the short term – buying chocolate because we feel sad – might not be the best decision in the long term, and do you really trust an algorithm that fully?
- Will algorithms just increase inequality?
Algorithms can only judge the data that they are fed. As the majority of people working in tech are WEIRD (Western, Educated, Industrialized, Rich and Democratic), most of the data they use is also heavily biased. We have already seen that facial recognition technology works best on white male faces, and we have seen algorithms developed to predict the likelihood of a criminal reoffending misclassify black defendants more often than their white counterparts.
Technology has always tended to magnify existing trends. So, when it comes to the ‘broken’ parts of our society – those that lie at the roots of sexism, racism, ageism and many other biases – it unsurprisingly follows the same dynamic, because that is what it finds in our data. This should be high on the agenda of every company investing in customer analytics: how can we make sure that our AI systems do not further amplify existing inequality?