In her latest turn in front of a phalanx of lawmakers, Facebook whistleblower Frances Haugen gave polished testimony to the European Parliament on Monday, following similar sessions in front of UK and US legislators in recent weeks.
Her core message was the same dire warning she’s sounded on both sides of the Atlantic: Facebook prioritizes profit over safety, choosing to ignore the amplification of toxic content that harms individuals, societies and democracy. Regulatory oversight is therefore essential to rein in such irresponsibly operated platform power and hold it accountable, and lawmakers have no time to lose in imposing rules on social media.
The highest-profile Facebook whistleblower to date got a very warm reception from the European Parliament, where MEPs were universally effusive in thanking her for her time, and for what they couched as her “bravery” in raising her concerns publicly, applauding Haugen before she spoke and again at the end of the nearly three-hour presentation and Q&A session.
They questioned her on a range of issues, devoting the largest share of their attention to how incoming pan-EU digital regulation can best deliver effective transparency and accountability from slippery platform giants.
The Digital Services Act (DSA) is front of mind for MEPs, who are considering and voting on amendments to the Commission’s proposal that could seriously reshape the legislation.
One such push, by some MEPs, would add an outright ban on behavioral advertising to the legislation, in favor of privacy-safe alternatives like contextual ads. Another amendment that’s recently gained backing would exempt news media from platform content takedowns.
Turns out Haugen isn’t a fan of either of those potential amendments. But she spoke up in favor of the regulation as a whole.
The general thrust of the DSA is aimed at achieving a trusted and safe online environment. A number of MEPs speaking during today’s session spied a soapboxing opportunity to toot the EU’s horn for having a digital regulation not just on the table but advancing rapidly toward adoption in the midst of (yet) another Facebook publicity crisis, with the glare of the global spotlight on Haugen as she spoke to the European Parliament.
The Facebook whistleblower was happy to massage political egos, telling MEPs that she’s “grateful” the EU is taking platform regulation seriously — and suggesting there’s an opportunity for the bloc to set a “global gold standard” with the DSA.
She used a similar line in the UK parliament during an evidence session last month, where she talked up domestic online safety legislation in similarly glowing terms.
To MEPs, Haugen repeated her warning to UK lawmakers that Facebook is exceptionally adept at “dancing with data”, impressing on them that they too must not pass naive laws that simply require the tech giant to hand over data about what’s happening on its platform. Rather, Facebook must be made to explain any data sets it hands over, down to the detail of the queries it used to pull the data and generate oversight audits.
Without such a step in the legislation, Haugen warned, shiny new EU digital rules will arrive with a massive loophole baked in for Facebook to dance through by serving up selectively self-serving data, running whatever queries it needs to paint the picture that gets the tick in the box.
For regulation to be effective on platforms as untrustworthy as Facebook, she suggested, it must be multi-tiered and dynamic, and take continuous input from a broader ecosystem of civil society organizations and external researchers, to stay on top of emergent harms and ensure the law is actually doing the job intended.
It should also take a broad view of oversight, she urged, providing platform data to a wider circle of external experts than just the ‘vetted academics’ of the current DSA proposal in order to really deliver the sought-after accountability around AI-fuelled impacts.
“Facebook has shown that they will lie with data,” she told the European Parliament. “I encourage you to put in the DSA; if Facebook gives you data they should have to show you how they got it… It’s really, really important that they should have to disclose the process, the queries, the notebooks they used to pull this data because you can’t trust anything they give you unless you can confirm that.”
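To make that demand concrete, here is a minimal sketch, in Python, of what a “disclose the query along with the data” requirement could look like in practice: every figure handed to a regulator travels with the exact query and methodology notes that produced it, plus a checksum auditors can use to verify the rows they re-pull. The manifest fields and the example query are hypothetical illustrations, not anything specified in the DSA or used by Facebook.

```python
# A minimal sketch of the disclosure idea Haugen describes: any data set a
# platform hands to a regulator travels with the exact query that produced it,
# so auditors can re-run the methodology rather than trust the output.
# All names here (the manifest fields, the example query) are hypothetical.
import hashlib
import json

def build_disclosure_manifest(query: str, results: list, analyst_notes: str) -> dict:
    """Bundle reported figures with the query and a checksum of the raw rows."""
    raw = json.dumps(results, sort_keys=True).encode()
    return {
        "query": query,                      # the exact query used to pull the data
        "methodology_notes": analyst_notes,  # e.g. sampling windows, exclusions
        "row_count": len(results),
        "results_sha256": hashlib.sha256(raw).hexdigest(),  # lets auditors verify the rows
        "results": results,
    }

# Hypothetical example: a platform reports hate-speech takedown rates.
manifest = build_disclosure_manifest(
    query="SELECT market, takedown_rate FROM safety_metrics WHERE quarter = '2021-Q3'",
    results=[{"market": "DE", "takedown_rate": 0.62}, {"market": "RO", "takedown_rate": 0.18}],
    analyst_notes="Rates computed over sampled reports; excludes appeals.",
)
print(json.dumps(manifest, indent=2))
```

The point is less the particular fields than the principle Haugen articulates: an audit artifact that can be re-run and checked, rather than a bare number that has to be taken on trust.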
Haugen didn’t just sound the alarm; she layered on the flattery, too — telling MEPs that she “strongly believe[s] that Europe has a critical role to play in regulating these platforms because you are a vibrant, linguistically diverse democracy”.
“If you get the DSA right for your linguistically and ethnically diverse, 450 million EU citizens you can create a game-changer for the world — you can force platforms to price in societal risk to their business operations so that the decisions about what products to build and how to build them is not purely based on profit maximization. You can establish systemic rules and standards that address risks while protecting free speech and you can show the world how transparency, oversight and enforcement should work.”
“There’s a deep, deep need to make sure that platforms must disclose what safety systems they have, what languages those safety systems are in and a performance per language — and that’s the kind of thing where you can put in the DSA,” she went on, fleshing out her case for comprehensive disclosure requirements in the DSA. “You can say: You need to be honest with us on is this actually dangerous for a large fraction of Europeans.”
Such an approach would have benefits that scale beyond Europe, per Haugen — by forcing Facebook “towards language-neutral content-neutral solutions” which she argued are needed to tackle harms across all the markets and languages where the platform operates.
The skew in how much of Facebook’s (limited) safety budget gets directed toward English-speaking markets, and/or toward the handful of markets where it fears regulation, is one of the core issues amplified by her leaking of so many internal Facebook documents. She suggested Europe could help tackle this lack of global equity in how powerful platforms operate (and in what they choose to prioritize or deprioritize) by enforcing context-specific transparency around Facebook’s AI models: requiring not just a general measure of performance but specifics per market, per language, per safety system, even per cohort of heavily targeted users.
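As a rough illustration of the kind of per-context reporting she’s arguing for, the sketch below (again in Python, with invented systems, languages, figures and threshold) shows how a regulator might consume “performance per safety system, per language” disclosures and flag the under-protected combinations:

```python
# A sketch of the per-language disclosure Haugen argues for: instead of one
# global accuracy number, a platform reports how each safety system performs
# in each language it claims to cover. Systems, languages, figures and the
# recall threshold below are all invented for illustration.
RECALL_FLOOR = 0.5  # hypothetical minimum a regulator might require

# (safety system, language) -> fraction of known-violating content it catches
reported_recall = {
    ("hate_speech_classifier", "English"): 0.78,
    ("hate_speech_classifier", "Romanian"): 0.21,
    ("violence_incitement_classifier", "English"): 0.65,
    ("violence_incitement_classifier", "Amharic"): 0.09,
}

for (system, language), recall in sorted(reported_recall.items()):
    status = "OK" if recall >= RECALL_FLOOR else "UNDER-PROTECTED"
    print(f"{system:>32} | {language:<10} | recall={recall:.2f} | {status}")
```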
Forcing Facebook to address safety as a systemic requirement would not only solve problems the platform causes in markets across Europe but it would “speak up for people who live in fragile places in the world that don’t have as much influence”, she argued, adding: “The places in the world that have the most linguistic diversity are often the most fragile places and they need Europe to step in — because you guys have influence and you can really help them.”
While many of Haugen’s talking points were familiar from her earlier testimony sessions and press interviews, during the Q&A a number of EU lawmakers sought to engage her on whether Facebook’s problem with toxic content amplification might be tackled by an outright ban on microtargeted/behavioral advertising (an active debate in the parliament), so that the adtech giant could no longer use people’s information against them to profit through data-driven manipulation.
On this, Haugen demurred — saying she supports people being able to choose ad targeting (or no ad targeting) themselves, rather than regulators deciding.
Instead of an outright ban she suggested that “specific things and ads… really need to be regulated” — pointing to ad rates as one area she would target for regulation. “Given the current system subsidizes hate — it’s 5x to 10x cheaper to run a political ad that’s hateful than a non-hateful ad — I think you need to have flat rates for ads,” she said on that. “But I also think there should be regulation on targeting ads to specific people.
“I don’t know if you’re aware of this but you can target specific ads to an audience of 100 people. And I’m pretty sure [this] is being misused because I did an analysis on who is hyper exposed to political ads and unsurprisingly the people who are most exposed are in Washington DC and they are radically overexposed — we’re talking thousands of political ads a month. So I do think having mechanisms to target specific people without their knowledge I think is unacceptable.”
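For a flavor of the exposure analysis Haugen describes, here is a toy sketch that tallies political ad impressions per user over a month and flags outliers; the log, the names and the cutoff are invented, since her own methodology and thresholds aren’t public at this level of detail.

```python
# A rough sketch of the kind of exposure analysis Haugen describes: count how
# many political ads each user saw in a month and flag radical outliers.
# The data and threshold are invented for illustration.
from collections import Counter

ad_impressions = [  # (user_id, ad_type) pairs from a hypothetical monthly log
    ("user_dc_lobbyist", "political"), ("user_dc_lobbyist", "political"),
    ("user_dc_lobbyist", "political"), ("user_average", "political"),
]

monthly_counts = Counter(u for u, ad_type in ad_impressions if ad_type == "political")
HYPER_EXPOSED = 3  # hypothetical cutoff; Haugen cites thousands of ads a month

for user, count in monthly_counts.most_common():
    if count >= HYPER_EXPOSED:
        print(f"{user}: {count} political ads this month (hyper-exposed)")
```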
Haugen also argued for a ban on Facebook being able to use third party data sources to enrich the profiles it holds on people for ad targeting purposes.
“With regard to profiling and data retention I think you shouldn’t be allowed to take third party data sources — something Facebook does, they work with credit card companies, other forms — and it makes their ads radically more profitable,” she said, adding: “I think you should have to consent to every time you hook up more data sources. Because I think people would feel really uncomfortable if they knew that Facebook had some of the data they do.”
But on behavioral ad targeting she studiously avoided supporting an outright ban.
It was an interesting wrinkle in the session, given there is momentum on the issue within the EU, including as a result of her own whistleblowing amplifying regional lawmakers’ concerns about Facebook, and Haugen could have helped stoke that (but opted not to).
“With regard to targeted ads, I’m a strong proponent that people should be allowed to make choices with regard to how they are targeted — and I encourage prohibiting dark patterns that force people into opting into those things,” she said during one response, without going into detail on exactly how regulators could draft a law that’s effective against something as cynically multifaceted as ‘dark pattern design’.
“Platforms should have to be transparent about how they use that data,” was all she offered, before falling back on reiterating: “I’m a big proponent that they should also have to publish policies like do they give flat ad rates for all political ads because you shouldn’t be subsidizing hate in political ads.”
Her argument against banning behavioral ads seemed to hinge on regulators achieving fully comprehensive platform transparency: transparency that’s able to provide an accurate picture of what Facebook (et al) actually does with people’s data, so that users can then make a genuine choice over whether they want such targeting or not. In short, it hinges on full-picture accountability.
Yet at another point in the session — after she had been asked whether children can really consent to data processing by platforms like Facebook — Haugen argued it’s doubtful that adults can (currently) understand what Facebook is doing with their data, let alone kids.
“With regard to can children understand what they’re trading away, I think almost certainly we as adults — we don’t know what we’ve traded away,” she told MEPs. “We don’t know what goes in the algorithms, we don’t know how we’re targeted so the idea that children can give informed consent — I don’t think we give informed consent and they have less capability.”
Given that, her faith that such comprehensive transparency is possible — and will paint a universally comprehensible picture of data-driven manipulation that allows all adults to make a truly informed decision to accept manipulative behavior ads (or not) — looks, well, rather tenuous.
If we follow Haugen’s logic, were the suggested cure of radical transparency to fail (say, because regulators improperly or inaccurately communicate what’s been found to users, or fail to ensure users are appropriately and universally educated about their risks and rights), the risk is surely that data-driven exploitation will continue, just now with a free pass baked into legislation.
Her argument here felt like it lacked coherence, as if her opposition to banning behavioral ads, and therefore to tackling one core incentive fuelling social media’s manipulative toxicity, was rather more ideological than logical.
(Certainly it looks like quite the leap of faith to expect governments around the world to scramble into place the kind of high-functioning, ‘full-fat’ oversight Haugen suggests is needed, even as, simultaneously, she’s spent weeks impressing on lawmakers that platforms can only be understood as highly context-specific and devilishly data-detailed algorithm machines. Not to mention the sheer scale of the task at hand, given Facebook’s “amazing” amounts of data, as she put it in the Q&A today, when she suggested that if regulators were handed Facebook data in raw form it would be far too overwhelming for them…)
This is also perhaps exactly the perspective you’d expect from a data scientist, not a rights expert.
(Ditto, her quick dismissal of banning behavioral ads is the sort of trigger reaction you’d expect from a platform insider whose expertise comes from having been privy to the black boxes, focused on manipulating algorithms and data, versus standing outside the machine where the harms flow and are felt.)
At another point during the session Haugen further complicated her advocacy for radical transparency as the sole panacea for social media’s ills — warning against the EU leaving enforcement of such complex matters up to 27 national agencies.
Were the EU to do that, she suggested, it would doom the DSA to fail. Instead she advised lawmakers to create a central EU bureaucracy to enforce the highly detailed, layered and dynamic rules she says are needed to wrap Facebook-level platforms, going so far as to suggest that ex-industry algorithm experts like herself might find a “home” there, chipping in to help with their specialist knowledge and “giv[ing] back by contributing to public accountability”.
“The number of formal experts in these things — how the algorithms really work and the consequences of them — there are very, very few in the world. Because you can’t get a Master’s degree in it, you can’t get a PhD in it, you have to go work for one of these companies and be trained up internally,” she suggested, adding: “I sincerely worry that if you delegate this functionality to 27 Member States you will not be able to get critical mass in any one place.
“It’ll be very, very difficult to get enough experts and distribute them that broadly.”
With so many warnings to lawmakers about the need to nail down devilish details in self-serving data sets and “fragile” AIs, in order to prevent platforms from simply carrying on pulling the wool over everyone’s eyes, it seems instructive that Haugen is so opposed to regulators actually choosing to set some simple limits, such as no personal data for ads.
She was also asked directly by MEPs whether regulators should put limits on what platforms can do with data and/or on the inputs they can use for algorithms. Again, her preference in response was for transparency, not limits. (Although elsewhere, as noted above, she did at least call for a ban on Facebook buying third-party data sets to enrich its ad profiling.)
Ultimately, then, the ideology of the algorithm expert may have a few blind spots when it comes to thinking outside the blackbox for ways to come up with effective regulation for data-driven software machines.
Some hard stops might actually be just what’s needed for democratic societies to wrest back control from data-mining tech giants.
Haugen’s best advocacy may therefore be her highly detailed warnings about the risk of loopholes fatally scuttling digital regulation. She is undoubtedly correct that the risks here are multitudinous.
Earlier in her presentation she raised another possible loophole, pushing lawmakers not to exempt news media content from the DSA (another potential amendment MEPs are mulling). “If you’re going to make content-neutral rules, then they must really be neutral,” she argued. “Nothing is singled out and nothing is exempted.
“Every modern disinformation campaign will exploit news media channels on digital platforms by gaming the system,” she warned. “If the DSA makes it illegal for platforms to address these issues we risk undermining the effectiveness of the law — indeed we may be worse off than today’s situation.”
During the Q&A, Haugen also faced a couple of questions from MEPs on new challenges that will arise for regulators in light of Facebook’s planned pivot to building the so-called ‘metaverse’.
On this she told lawmakers she’s “extremely concerned” — warning of the increased data gathering that could flow from the proliferation of metaverse-feeding sensors in homes and offices.
She also raised concerns that Facebook’s focus on building workplace tools might result in a situation in which opting out is not even an option, given that employees typically have little say over business tools — suggesting people may face a dystopic future choice between Facebook’s ad profiling or being able to earn a living.
Facebook’s fresh focus on “the metaverse” illustrates what Haugen dubbed a “meta problem” for Facebook: its preference is “to move on” rather than stop and fix the problems created by its current technology.
Regulators must throw the levers that force the juggernaut to plot a new, safety-focused course, she added.