I was very grateful for the opportunity to talk at TEDxReading 2017 on the topic of social media filter bubbles and their effect on how we form our opinions. It was a fantastic day with some fascinating speakers. The video of the talk is available here and the transcript is below.
2015 marked the year that most people received their news from Facebook rather than Google. This seemingly innocuous shift in how we get our information signals the first time content has been pushed to us rather than pulled – by that, I mean we have become reliant on our friends, our family and our social networks’ algorithms to choose what content to show us and when, rather than proactively seeking out information and news on a particular event or topic ourselves.
It is also no coincidence that the political landscape since 2015 has seen some tremendous upheaval – Brexit, Donald Trump and even the recent General Election – and it is more important than ever that we are aware of the technology that sits behind this shift in how we consume our information, as well as the ramifications it has on how we form opinions and act more broadly as citizens and as a society.
History has taught us that technology has a strong cultural impact on how we form opinions. In the late 1940s and early 1950s, American TV went through its “golden age”, when it was dominated by three channels and ushered in the age of “mass media”. Certain programmes were watched by upwards of 80% of TV households in America! So what early TV lacked in diversity, it promoted in cohesiveness via the proverbial “water cooler”, ushering in an age of cultural homogeneity as people discussed these shows together.
It was only when cable came along – with channels that catered for a niche rather than a nation – that we started to see this fracturing. Niche audiences could be targeted more precisely by advertisers, and, driven by this commercial incentive, you saw an explosion in the number of channels catering to increasingly narrow audiences.
While this diversification promoted a broader set of views, social commentators of the time lamented the return to a more fractured social order, as fringe views and ever-narrower political or cultural demographic audiences were catered for. In essence, people were finding fewer and fewer topics to discuss around that same water cooler.
This fragmentation from the era of mass media was only the first step. Once the Internet came along, it gave everyone a voice. Because it was born of academia and government grants rather than commercial interests, the Internet quickly led to the greatest information explosion in human history.
Now, that explosion in information has forced us to develop increasingly sophisticated ways to filter all of that content. This is nothing new; large Internet-based companies quickly realised they could use data to help consumers find, watch and purchase content. Amazon and Netflix use your previous history (and that of others) to recommend similar things for you to watch or purchase. Google uses literally hundreds of data points (such as location, gender and search history) to personalise search results in order to provide you with the information you “want” as soon as possible.
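As an aside for the technically curious: the heart of a “people who watched this also watched that” recommender can be surprisingly small. The sketch below is purely illustrative – a toy co-occurrence counter over made-up viewing histories, not Amazon’s or Netflix’s actual system – but it captures the basic idea of using other people’s history to rank suggestions for you.

```python
from collections import Counter
from itertools import combinations

# Toy viewing histories -- purely illustrative, not real data.
histories = {
    "alice": {"The Crown", "Planet Earth", "Chef's Table"},
    "bob":   {"The Crown", "Planet Earth", "Black Mirror"},
    "carol": {"Black Mirror", "Planet Earth"},
}

# Count how often each pair of titles appears in the same person's history.
co_occurrence = Counter()
for watched in histories.values():
    for a, b in combinations(sorted(watched), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def recommend(user, top_n=3):
    """Suggest titles that most often co-occur with what the user has already seen."""
    seen = histories[user]
    scores = Counter()
    for (a, b), count in co_occurrence.items():
        if a in seen and b not in seen:
            scores[b] += count
    return [title for title, _ in scores.most_common(top_n)]

print(recommend("carol"))  # ['The Crown', "Chef's Table"]
```

Real systems use far more signals and far more data, but the principle – scoring things you haven’t seen by how often they co-occur with things you have – is the same.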
Yet it was the vast explosion of social media platforms such as Facebook, Instagram and Twitter that catalysed a tectonic shift in how we consume that content. Early platforms relied on a simple, chronological ordering of information: status updates, shared articles, blog posts and pictures.
However, with billions of users and trillions of connections, networks such as Facebook started to look at ways to organise and rank this information to be more meaningful. They initially did this in a very, very simple way – the Like button! It was a way for us to show our approval of status updates and content; but the genius behind the Like button was that it allowed Facebook to deepen its understanding of how its audiences engaged with the information being shared. Content could now be ranked by popularity and, more importantly, future content could be ranked by individual relevance based on what we’d previously liked.
So by filtering the information you were seeing in a way that felt relevant, these companies could more effectively monetise you and your activities.
Now this seems like a win-win situation: consumers receive content they know they’ll find interesting in a Ptolemaic fashion – putting them at the centre of their own information universe – whilst the social networks get to build highly accurate profiles of their billions of users to sell hyper-targeted advertising.
But for every piece of content I am shown, what am I not shown? Of course, this can be totally innocuous – for every cat video I watch, I might miss out on another meme-based video. Or for every picture of my little niece and nephew I like, I might miss out on a similar update from a friend. However, what happens when someone shares news items, current affairs, political commentary, op-eds or research papers? If I’ve historically only clicked on liberal-leaning political commentary, current affairs in the Western hemisphere, or articles about climate change denial or immigration issues, I am considerably more likely to be shown similar articles, as they’d be deemed more engaging by the underlying algorithms. This is not because Facebook’s algorithm has any innate understanding of a conservative or liberal bias in content, but because it has noticed that I’ve previously engaged with (or “liked”) similar articles – such as content by the same publisher, or articles that were also read by my (similarly minded) friends.
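To make that last point concrete, here is a deliberately simplified sketch of an engagement-based ranker – my own toy illustration, not Facebook’s code, with invented publishers and friends. Notice that nothing in it knows anything about politics; it only knows which publishers and which friends I have engaged with before, yet the dissenting article still sinks to the bottom of the feed.

```python
# Illustrative only: a toy "relevance" ranker driven purely by past engagement.
# There is no notion of political leaning anywhere in this code -- the bias
# emerges from the engagement history it is fed.

my_liked_publishers = {"The Daily Broadsheet", "ListWorld"}          # hypothetical publishers
friends_who_think_like_me = {"sam", "priya"}                          # hypothetical friends

candidate_posts = [
    {"title": "Climate report explained",  "publisher": "The Daily Broadsheet", "liked_by": {"sam"}},
    {"title": "Why the report is wrong",   "publisher": "Sceptic Weekly",       "liked_by": {"dave"}},
    {"title": "Ten cats who hate Mondays", "publisher": "ListWorld",            "liked_by": {"priya", "sam"}},
]

def relevance(post):
    """Score a post purely on past-engagement signals."""
    score = 0.0
    if post["publisher"] in my_liked_publishers:
        score += 1.0                          # I've engaged with this publisher before
    score += 0.5 * len(post["liked_by"] & friends_who_think_like_me)  # liked by similar-minded friends
    return score

feed = sorted(candidate_posts, key=relevance, reverse=True)
for post in feed:
    print(round(relevance(post), 1), post["title"])
# The dissenting article ends up last -- not because the code understands
# its viewpoint, but because no one "like me" engaged with it.
```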
So as our content becomes more tailored, we need to be acutely aware of this social media “echo chamber”. Traditional mass media platforms such as TV and cable needed to produce content that catered for a broad set of opinions and viewpoints, even within cable’s narrower demographics. However, counterintuitively, on the Internet – arguably the largest mass media platform ever created – these filtering algorithms allow for an “audience of one”: no matter what the content, the algorithms will find traditionally disparate or disconnected people to show it to.
Again, in and of itself, this is no bad thing – giving everyone a voice on an equal footing is a very powerful tool for free speech – but when we are only shown content that self-reinforces our previously held beliefs, with no dissent, no alternatives, no serendipitous stumbling across new and innovative ideas, we start to live in our own little bubble.
Eli Pariser was a vocal critic of what he termed the “filter bubble”, which he described as “a self-reinforcing pattern of narrowing exposure that reduces user creativity, learning and connection.” He argued that humans’ ability to synthesise and simplify new information is the root of our intelligence, and that these algorithms trap us in environments of our own creation.
Indeed, this problem might get worse before it gets better. We’re on the cusp of what’s being called the “Fourth Industrial Revolution” – a term used to describe artificial intelligence and other advanced technologies being incorporated into our day-to-day technology.
Artificially intelligent filtering algorithms may augment current technology by actually understanding the content itself – by reading an article or watching a video, these technologies will be able to further filter the information that’s presented to us. Again, this is a double-edged sword! AI will do a better job of showing us the information we’d find interesting; but at the same time it will also do a better job of reinforcing the filter bubble we are putting up around ourselves.
And as these filtering algorithms and artificial intelligence become more mainstream, we’ll need to understand how their inherent amorality affects our own moral views.
A very simple example of this is to go on to Google’s Image Search and type “beautiful skin” – it will become immediately apparent how heavily the results are biased toward young, white women. This doesn’t prove any racial bias within the algorithm itself, but it does surface a latent, historical trend in our own society.
Similarly, Facebook themselves showed how their algorithms could be tweaked to affect how people felt. In a very interesting (but ethically dubious!) piece of research published in 2014, Facebook researchers tweaked the newsfeed algorithm for about 700,000 users to show more or less “emotional” content. They found that when positive emotional content was reduced, users posted less positive content of their own, and vice versa! They dubbed this “emotional contagion”.
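For the curious, the mechanics need not be sophisticated. The sketch below is my own crude analogue of the idea – toy posts with naive sentiment labels, not the researchers’ actual method – showing how simply withholding a fraction of positive posts changes the emotional tone of what a user sees.

```python
import random

random.seed(42)

# Toy posts tagged with a naive sentiment label -- illustrative data only.
posts = [
    {"text": "Had the best day ever!",     "sentiment": "positive"},
    {"text": "Feeling really let down.",   "sentiment": "negative"},
    {"text": "New job, so excited!",       "sentiment": "positive"},
    {"text": "Lost my wallet. Awful day.", "sentiment": "negative"},
    {"text": "Lunch was fine, I suppose.", "sentiment": "neutral"},
]

def build_feed(posts, positive_drop_rate=0.5):
    """Randomly withhold a fraction of positive posts -- a crude stand-in for
    down-weighting emotional content in a newsfeed."""
    feed = []
    for post in posts:
        if post["sentiment"] == "positive" and random.random() < positive_drop_rate:
            continue  # quietly omitted from this user's feed
        feed.append(post)
    return feed

for post in build_feed(posts):
    print(post["text"])
```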
So here we have billions of people receiving the majority of their information from amoral algorithms that pick and choose what they deem relevant so that we are better consumers for advertisers. We also know these algorithms can affect our emotions and hinder our ability to form informed opinions, because we’re accidentally self-selecting the information that’s shown to us!
Even after all of this, many people may be asking — So what? Is this not just the new normal?
Well, I spoke earlier about “cultural homogeneity” and “mass media”. These are not necessarily ideas to aspire to, but they do have the power (for better or worse) to unify a nation. Democracy, innovation and creativity depend on a diverse (and often dissenting) set of opinions and content to inspire conversation, stimulate debate and set thinking off in tangential directions. In an age where we consume the majority of our content from social media, we cannot live in a world where that content either acts as intellectual junk food or has been curated from an ever-narrower self-selection as our filter bubbles close in around us.
So where does this leave us? Well, this isn’t a problem with a single fix; it certainly can’t be solved by one group of people alone, or by yet more technology. Facebook has certainly taken steps to address concerns – during the recent UK and French elections, it introduced a feature called Perspectives. If you saw a post about the election, a prompt would appear allowing you to compare the major political parties and their stances on a variety of issues, from housing to immigration to fiscal policy. The idea was to broaden people’s exposure to different political opinions, as well as to combat political propaganda (or that woefully misused term, “fake news”!).
I’m a technologist at heart and as such firmly believe that despite all this, technology, artificial intelligence and social media all remain forces for good; they have a net positive benefit if we treat them maturely and are cognisant of their wider societal impacts.
The important word here is “we” – so, as a consumer:
I need to be conscious of the technology that I use, the sources of the news and information I consume, and the articles that I share. I must try to ensure I am open to (and even follow!) dissenting opinions.
My friends need to be aware of the same, as they’re my primary source of information.
You also need to be aware of these changes. As we’re all based in Reading, there’s a high probability we are all intertwined on social media via mutual friends, and as such we remain tertiary influences on every other person in here.
The social media networks, their engineers and their data scientists need to be acutely aware of the effects that their filtering algorithms will have on the populace, particularly as artificial intelligence advances and direct human intervention becomes less and less prevalent.
And as citizens:
Educators and academics need to prepare the next generation of children and students to embrace technology with a maturity that those of us who lived through the information revolution were ill-afforded.
Indeed, governments, policy makers and potentially regulators need to be sufficiently informed and understand how these issues will soon impact every facet of citizens’ lives.
But as information becomes the lifeblood of our lives, our economies and our governments, it needs all of us to ensure that the Internet – that place that gives us all a voice – does not have its information flow controlled by a handful of powerful technology companies using proprietary and opaque algorithms. Advocating transparency and accountability must start with each of us.
So this is really a rallying call to make sure we all understand how technology is quietly (and often inadvertently) affecting our thoughts and opinions. I hope it has been an eye-opener, and that you will all go on to open others’ eyes in a similar fashion.
The irony isn’t lost on me that the easiest way to do that would be to share this talk on Facebook!