In Hansel and Gretel, two children are lured into the candy-coated home of a deceitful witch. In exchange for household chores, the witch fattens up our protagonists with a buffet of delicacies so that she can eat them for dinner. Our 21st-century rendition stars internet users as Hansel and Gretel and casts platforms like Facebook as the cannibalistic witch. In this version, Hansel and Gretel discover the wonders of personalized advertising, where they are recommended content they actually want to see – like the latest release from an artist they recently discovered on Spotify. The two provide the witch with more of their data, such as their gender, age, birthday, and location, and in exchange, she feeds them more content to consume – so much, in fact, that it is as if she knows exactly what they are craving. But the original story hints that the witch had plans of her own: she was buttering them up for her own gain.
This is essentially how Facebook’s business model works – and it’s threatening our democracies.
It’s no secret that social media platforms are by and large advertising companies: 98% of Facebook’s revenue in 2019 came from advertisements. The concept is fairly straightforward: by providing free services, such as social networking, platforms collect data and mine it to predict user behavior. In other words, users give up their privacy and are rewarded with convenience. This transaction is part of what Harvard Business School scholar Shoshana Zuboff calls “surveillance capitalism”. Zuboff argues that “digital technology is separating the citizens in all societies into two groups: the watchers and the watched”; under this system, surveillance capitalism “unilaterally claims human experience as free raw material for translation into behavioral data”, giving rise to “prediction products” that anticipate what users will do “now, soon, and later.”
The ad-tech ecosystem provides the perfect mechanism for tailoring any type of message to one’s audience of choice, and disinformation actors have succeeded in permeating this system. Whether their motivations are rent-seeking or political, these actors, ranging from British politicians to teenagers in suburban Macedonia, have spread disinformation at a rate far beyond anything print newspapers and cable TV were ever capable of. The Global Disinformation Index (GDI) describes this duality perfectly: “if an advertiser captures the data showing that 15-24 year-old men from Johannesburg are more likely to buy sneakers on Friday nights, ads can be targeted for that time slot. Equally, if a disinformation actor knows that this same group also responds to anti-immigration messaging, they can be targeted at the same time with related conspiracy theories (which may even look like real news stories).”
Facebook profits by amplifying lies and selling its microtargeting technology to malicious actors. Its business model exploits our data and manipulates us with personalized ads, some of which are invisible to everyone but us (so-called dark ads). Our data feeds machine-learning models sold to advertisers, who target emotionally charged political messages at segmented audiences – otherwise known as filter bubbles – deepening social divides. During her 2019 presidential campaign, Elizabeth Warren intentionally ran a Facebook ad with false claims to prove her accusation that Facebook is a “disinformation-for-profit machine.” Warren’s experiment shows how easily any entity can employ microtargeting to publish false information, as Russian state media recently demonstrated once again by spreading disinformation about the origins of COVID-19.
While we can now opt out of seeing targeted ads, we can’t opt out of being tracked and fed into increasingly sophisticated targeting algorithms. In 2019, Facebook announced that it would allow politicians in both the UK and US elections to run advertisements containing false claims. With freedom of expression as his justification, Mark Zuckerberg enables the spread of disinformation through both microtargeting and a lack of fact-checking. This laissez-faire approach makes one thing clear: as long as platforms prioritize profit maximization at the expense of human rights and transparency, our choices are never truly ours.
We need to reclaim our rights, and that means taking ownership of our data. We assume we have knowingly accepted the trade-off between our data and maximum convenience, but as Zuboff explains, “everything that’s inside that choice has been designed to keep us in ignorance.” By downloading an app, we share the contacts on our phone and grant third-party software access to our microphone and camera. Much of this has nothing to do with the app’s functionality; it serves only to enable surveillance capitalism – an implicit agreement we entered into but were never told about.
Facebook’s business model is the problem, and the past decade has shown that self-regulation alone is not the answer. Facebook will go where the money is, and the EU, as one of its biggest markets, has strong regulatory capacity that can set a precedent for platform regulation worldwide.
To enhance transparency and advance data rights, the EU adopted the General Data Protection Regulation (GDPR), a regulation on the processing of personal data that came into force in May 2018. The GDPR aims to curb data exploitation by imposing obligations on those who process data and by protecting user rights. In particular, its measures include limiting data collection to pre-specified purposes, restricting how long a host can store user data, holding hosts accountable, and enabling users to know how their personal data is used. While the EU’s privacy rules are more advanced than those of the US, the GDPR contains no specific provisions on microtargeting. Given its broad scope, many of its rules remain vague and abstract. The ‘catch-all’ conception of the GDPR hence leaves judgement on its effectiveness in regulating microtargeting to case law until more specific measures are passed.
In the case of political campaigning, microtargeting is interpreted as a form of political communication and is thus protected under freedom of expression. While member states’ national laws vary, the European Commission’s Code of Practice on Disinformation aims to encourage self-regulatory solutions for social networks. Nevertheless, the decentralized nature of electoral law in Europe challenges the idea of a standardized directive: microtargeting regulation in the Netherlands, for instance, pales in comparison to Germany’s Rundfunkstaatsvertrag. And while the EU can enforce broad data protection measures, it lacks the competence to regulate the integrity of national elections – the core issue for political advertising. Instead of any actor going it alone, all stakeholders must cooperate to protect users.
Facebook has invested heavily in public relations stunts to present itself as a champion of democracy; Zuckerberg has called for regulation yet has failed to regulate his own platform. As long as Facebook relies on microtargeting to finance its business model, disinformation will continue to spread under surveillance capitalism. The link between privacy exploitation and disinformation lies at the heart of these platforms’ businesses. The power dynamic between platforms, governments, and civil society needs to change: we need to reclaim our data rights.