This is more than just a brewing controversy: Facebook is mired in a developing scandal that casts a bright spotlight on how precarious privacy and personal data truly are on the world’s largest social network.
The gist of the current problem is that Facebook enabled Cambridge Analytica, a British data mining firm, to harvest the data of 50 million of the social network’s users without their knowledge. Whistleblowers from both companies have alleged that the firm deliberately used that information to influence the 2016 U.S. presidential election and the U.K.’s Brexit referendum.
What is unclear is whether Facebook was complicit. Did company executives know user data was being used to manipulate how information was delivered based on user preferences?
Ties that bind
Their mutual ties stem from a quiz released on Facebook in 2014 inviting users to discover their personality type. The quiz was developed by Cambridge University academic Aleksandr Kogan, but not on behalf of Cambridge Analytica, which, despite its name, has no ties to the university.
At the time, apps and games on Facebook could harvest data not only from the quiz’s participants, but from all of their friends as well. About 270,000 people took the quiz, and B.C. native Christopher Wylie, who worked with Cambridge Analytica (and was reportedly a co-founder), claims the reach extended to about 50 million users, most of them based in the United States.
He alleges that the data was harvested without users’ consent, and that Cambridge Analytica used it to target certain users and funnel pro-Trump content to them. That would have contravened Facebook’s own move to limit how much data third parties could access in the first place. Despite being acknowledged for its role in helping the Trump campaign, Cambridge Analytica argues it did nothing wrong because it never used the data.
This raises questions of authorization and compliance. When Facebook eventually tightened restrictions on third-party data access, it would have required those third parties to delete the information they had acquired. Cambridge Analytica maintains that it complied, though Facebook has yet to verify that it did. Wylie has gone on record saying the data remained accessible all along.
The data mining firm suspended its CEO, Alexander Nix, after a video surfaced of him explaining questionable practices to a man posing as a prospective Sri Lankan political candidate. Nix’s assertions included veiled references to prostitution, blackmail and smear campaigns to help sway public opinion in support of whoever contracted the firm’s services.
Talking the talk
Facebook was already under fire for its lack of action on fake stories populating the site and users’ feeds, stories that have been blamed on troll farms, primarily in Russia.
For his part, CEO Mark Zuckerberg can’t seem to figure out what Facebook should be, at least not publicly. After rejecting any notion that his company was a news platform in the fallout from Donald Trump’s election in the U.S., he retreated to the safer space of the social network’s original premise: sharing “personal moments” with friends and family.
The company issued a statement today that fell short of an apology and offered little detail on what actually happened. Government officials have been wary of Facebook going back to the U.S. election debacle, which is why there are now several requests for Zuckerberg and COO Sheryl Sandberg to appear before various committees.
The U.S. Federal Trade Commission (FTC) and congressional committees have already scheduled hearings. The U.K. and European Union are planning to do the same.
In Canada, the privacy watchdog has launched an investigation to determine whether Facebook violated the Personal Information Protection and Electronic Documents Act (PIPEDA), the law governing data privacy in the country’s private sector.
The irony here is that for a company that speaks openly about “bringing the world closer together” and “transparency,” Facebook is noticeably vague and cryptic in how it treats its most valuable commodity: user data. Now it will have to explain what it’s been doing to lawmakers and regulators.
Defining role
It should be common knowledge by now that users represent a massive pool of eyeballs for advertisers to connect with on the social network. The contentious issue that usually comes up with privacy breaches is the tracking associated with targeted ads.
The current scandal has less to do with ads than with editorial content, but the central issue in both cases is personal data. Targeted ads should be an expectation when using any online tool that can track your preferences. That’s not an endorsement, simply a statement of fact.
And it’s not specific to Facebook. Google does it all the time, as do many other companies, even if at a smaller scale. Ad blockers and VPN services have largely marketed their wares on anonymizing your digital footprint so you leave fewer tracks behind. A 2014 article in Time Magazine outlined seven ways that Facebook had used data.
Facebook’s own data policy outlines how it uses the data you put into the site. The crux is that it uses what you voluntarily provide to the social network: tell it your age, workplace, interests, where you live, friends and relationship status, and those all become data points that advertisers can home in on.
A detailed how-to from Quartz Media explains how to check which apps have access to your Facebook account and how to remove them. Another post from The Verge covers how to use the social network while exposing as little personal data as possible, or how to remove your data entirely.
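For readers comfortable with code, the same check can be done programmatically. The sketch below is mine, not from either guide: it assumes you already hold a user access token for the app in question and uses the Graph API’s /me/permissions endpoint (the version prefix shown was current in early 2018) to list what the app can see and then revoke its access.

```python
# Rough sketch (not from the article or the guides above): inspect and revoke
# one app's access to your Facebook data via the Graph API. Assumes you already
# hold a user access token issued to that app; the token below is a placeholder.
import requests

GRAPH = "https://graph.facebook.com/v2.12"   # API version current in early 2018
TOKEN = "YOUR_USER_ACCESS_TOKEN"             # placeholder, never commit real tokens

# List the permissions this app has been granted for your account.
resp = requests.get(f"{GRAPH}/me/permissions", params={"access_token": TOKEN})
for perm in resp.json().get("data", []):
    print(perm["permission"], "->", perm["status"])   # e.g. user_friends -> granted

# De-authorize the app entirely, the API equivalent of removing it in Settings > Apps.
requests.delete(f"{GRAPH}/me/permissions", params={"access_token": TOKEN})
```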
User engagement
A book called Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley, by former Facebook software engineer Antonio Garcia Martinez, details some of the company’s targeted-ad development.
While critical of Facebook in subsequent interviews, he’s more matter-of-fact about the role ads generally play on social networks.
“No one cares about privacy in the deep way that they’re not going to use Facebook because of it,” said Martinez, in an interview on The David Pakman Show, an online independent media outlet. “Clearly, it’s empirically not the case, what with 2.3 billion users and growing. Every tech journalist is, like, legally required to write at least one Facebook privacy story a year or they lose their licence, or whatever.”
He was being tongue-in-cheek with that last part, but his point is well taken. That kind of worldwide footprint flies in the face of any talk of a widespread exodus over privacy concerns. It’s also true that anything remotely scandalous for the company stirs up our journalistic impulses, mine included.
But that interview also took place before the current Cambridge Analytica fiasco began unfolding. The outrage has been loudest among those in government and the media, but what about the masses? The company’s stock price may be falling, but will users start to abandon the network and pull their data from it?
Usage may have already peaked in North America: Facebook’s daily active user base in Canada and the U.S. fell to 184 million in Q4 2017. That’s a drop of only one million, but the company earned $26.76 (U.S.) per user in the same quarter, a 35% increase over the same quarter in 2016. Facebook made more money with fewer users, and it could do so again.
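A rough back-of-the-envelope calculation, using only the figures above and assuming (my assumption, not a reported one) that the per-user figure can be applied to the quoted daily user counts, shows why a one-million-user dip barely registers:

```latex
% Sketch using only the article's figures; treating the per-user revenue as
% applying to the reported daily user counts is an assumption for illustration.
\[
\frac{\$26.76}{1.35} \approx \$19.82 \quad \text{(implied per-user revenue a year earlier)}
\]
\[
184\,\mathrm{M} \times \$26.76 \approx \$4.9\,\mathrm{B}
\qquad \text{vs.} \qquad
185\,\mathrm{M} \times \$19.82 \approx \$3.7\,\mathrm{B}
\]
```

Even with a million more users at the year-ago rate, the haul would have been far smaller, which is the sense in which fewer users still meant more money.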
Moving forward
Whatever the total fallout from this situation, Facebook is inching closer to being regulated. Zuckerberg, Sandberg and co. will have to provide some of the transparency they expect from their users.
In turn, users will need to do their part to stay on top of what they’re putting out there. Facebook may not be so forthcoming until regulators force its hand.