The Social Dilemma: We Have Clear Warnings, We Need Real Solutions

By: Lee Rickwood

October 23, 2020

Much of today’s technology is designed to deceive and manipulate us, not to protect us, our privacy or our property.

Into our smartphones, our social media sites, and our connected digital gadgetry, product designers have embedded techniques and strategies to get us to use their products more, even as those products protect our valuable assets less.

Several books, reports, and now movies look at how and why technology is designed the way it is, and the picture they present is not pretty: rather than showing a consistent, overarching concern for user privacy and the protection of personal information, many companies treat it as a corporate imperative to get users to engage more, to disclose more, to connect more.

The tools tech companies have to influence our use of their technology include buttons and links, the timing of alerts and warnings, the shape of screens, the colour of dashboards, and more. Their careful and strategic use of language leads us to follow links, fill out surveys, respond to push notifications or timeline alerts, and otherwise share information in ways that are good for the company but not necessarily good for us.


A new documentary called The Social Dilemma is a cinematic blend of fact and fiction warning of the consequences of a powerful social media industry. Netflix image.

Whether it’s the self-esteem of a young high school student being undermined by cyber-bullies; the loneliness and isolation of a senior citizen even with increasing connectivity; the unabashed fealty of modern consumers to social influencers; or the outright manipulation of political views and actions by unknown actors with unseen agendas, our society is facing a social media dilemma.

These are among the concerns and warnings and revelations presented in a new documentary on Netflix called The Social Dilemma. A cinematic blend of fact and fiction, the movie warns of the consequences of the growing and powerful social media industry. Using real-world interviews with Silicon Valley insiders and some made-up scenarios about life inside a big tech company, the filmmakers show how social media is reprogramming civilization by portraying what’s on the other side of our screens.

Among those interviewed is Tristan Harris, a co-founder of the Center for Humane Technology and a former Google design ethicist who is very familiar with something called persuasive technology design: it’s how tech developers make use of well-known human psychological and physiological principles to “persuade” or “nudge” or otherwise get consumers to do something. Consumers may not be aware of the nudge; they may not question it; they may accept it as the price of entry. But the provider knows it is a tried-and-true method for getting consumers to use its products more often and for longer, bringing greater rewards to the provider.

The new Netflix film has engendered both vocal detractors and advocates, but its core message is well established, having been made by Harris and the folks at CHT, as well as in books like The Age of Surveillance Capitalism by Shoshana Zuboff and Weapons of Math Destruction by Cathy O’Neil, and in a revealing report called Deceived By Design.


Alerts, notifications, warnings, and flashing red buttons are among the tactics used to nudge users into using a product more, and feeling good about it. CHT image.

That report, from the Norwegian Consumer Council (NCC), says that tech giants (like Facebook, Google, and Microsoft) use design to “nudge users away from privacy-friendly choices.”

It sounds gentle, almost benevolent, that “nudge”.

But the nudge is a powerful behavioural modification concept, and in the NCC’s 44-page report the language gets stronger and darker. The authors cite several “dark patterns” they have detected online, such as privacy-intrusive default settings and misleading wording in designs that give users “an illusion of control” while hiding privacy-friendly choices and ensuring that choosing a privacy-friendly option requires more effort from the user.
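To make the pattern concrete, here is a minimal, hypothetical sketch in Python of what privacy-intrusive defaults and asymmetric effort can look like; every class, function, and setting name below is invented for illustration and does not depict any real platform’s code.

```python
from dataclasses import dataclass

# Hypothetical "privacy-intrusive defaults": doing nothing means sharing.
@dataclass
class PrivacySettings:
    ad_personalization: bool = True   # every default favours the platform
    face_recognition: bool = True
    location_history: bool = True

def accept_all(settings: PrivacySettings) -> None:
    """The prominent one-click path: keeps every intrusive default in place."""
    pass  # nothing to change; the defaults already collect everything

def opt_out(settings: PrivacySettings) -> None:
    """The privacy-friendly path: spread across extra screens, one toggle at a time."""
    settings.ad_personalization = False  # screen 1 of 3
    settings.face_recognition = False    # screen 2 of 3
    settings.location_history = False    # screen 3 of 3

settings = PrivacySettings()
accept_all(settings)  # one tap, and all data collection stays switched on
print(settings)
```

The asymmetry is the point: the data-hungry choice is a single tap, while the privacy-friendly choice costs three deliberate steps.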

The report also says that tech design often moves from a gentle nudge to a more direct threat: if users choose not to accept some privacy policies, they are warned of less relevant content, a loss of functionality, or even the deletion of their account.

But do the deception and manipulation constitute a blatant effort to trick people? Are these examples of someone’s evil intent or malicious desire? Some say the latter proposition is presented in The Social Dilemma to great effect, but blaming individual malfeasance may be missing the real target.

The corporation’s business model is built on accurate targeting of advertisements based on user activity, engagement, sharing, and disclosure. The corporation operates under a mandate to gain value, a structural imperative to boost profit, which today can mean selling more stuff, but often means collecting more data (and then selling the data, not just the product). Even in the process of seeming to protect data, the corporate mandate to collect data remains in place.

Social media companies make money on data, collected in various ways and often aggregated through the efforts of data brokers and third-party companies that “have every incentive to use the power” … “to engineer your almost near-constant disclosure of information.” Consumers “will be worn down by design; consent is preordained,” says Woodrow Hartzog.

Author Woodrow Hartzog discusses privacy implications caused by the design of technology in an information session hosted by Terrell McSweeny (seated), former Commissioner of the Federal Trade Commission (FTC). Twitter image.

In his new book, Privacy’s Blueprint: The Battle to Control the Design of New Technologies, Hartzog argues that technologies are engineered at a deep level to undermine user privacy, and that design therefore needs to be subject to improved standards, regulations, and laws.

Hinting at the possibility of a solution or response, he writes: “Design is everywhere; design is power and design is political.”

The techniques used by digital designers to ensure consumer participation are very similar to the ones used by Las Vegas slot machine manufacturers, writes New York University professor Natasha Dow Schüll in her book, Addiction by Design.

The bridge between technology design and addiction, she says, is something called a variable reward. Pull the arm on a slot machine and you may be an instant winner; you may not. But that is the enticement of a variable reward: it often feels worth trying again and again to get rewarded, particularly if the risk seems minor. And through machine algorithms and interface designs, the potential payoff and the apparently low risk are reinforced again and again. The reward may be a cash payout, a points bonus, or a thumbs-up like on your most recent post or food photo.
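As a rough illustration of why a variable reward is so enticing, consider this minimal simulation, written in Python with invented numbers: each pull carries a small certain cost and a rare, memorable payout, so the expected value is negative even though the occasional wins keep the behaviour going.

```python
import random

# One "pull" has a small, certain cost and a rare, salient payout.
def pull(win_probability: float = 0.1, payout: int = 8, cost: int = 1) -> int:
    reward = payout if random.random() < win_probability else 0
    return reward - cost

random.seed(42)  # reproducible run
results = [pull() for _ in range(1000)]
wins = sum(1 for r in results if r > 0)
print(f"wins: {wins}/1000, net outcome: {sum(results)}")
# Expected value per pull is 0.1 * 8 - 1 = -0.2: the occasional wins are
# what get remembered, while the slow net loss stays out of view.
```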

The risk is hidden, but it can include the loss of control over personal information, perhaps even a loss of control over the self.

Schüll, Hartzog, Harris, and others (in the new documentary and on other platforms) all talk about tech practices like persuasive technology design and how easily humans become addicted. Medical practitioners, marketers, product developers, and politicians are among those who have long known ways to take advantage of that fact.

Our bodies release neurochemicals in response to external stimuli: dopamine and oxytocin are among the chemicals we generate as a kind of reward to ourselves for doing, finding, having, or being something good.

As we scan our screens for bits of information that match our preconceptions and expectations, for example, our bodies reward each success with small doses of feel-good chemistry, and our desire for more such rewards grows.

Tristan Harris from the Center for Humane Technology speaks about addictive technology at various industry events. CHT Twitter image.

Harris calls the tricks used to exploit these human characteristics “the race to the bottom of the brain stem,” and The Social Dilemma tracks that race well, presenting threatening implications but rather few solutions.

Strengthened consumer protection practices (if not privacy legislation, then anti-trust and anti-monopoly proceedings) should be in place to balance the ever-accumulating corporate power of social media and big tech.

Of course, consumer awareness, increased education, and provider transparency are crucial.

To that end, what if our favourite gadget or online service came with an info sheet that specifies both the good and the bad outcomes the product may bring? A user manual that carefully describes the moral reckonings made by the developer and manufacturer before releasing their product?

We’re all used to reading product tech specs about processor speeds, screen resolutions, and bandwidth capabilities. How about specs detailing a social media company’s internal security protocols? Or a list of its compatibilities, such as its work with third-party data aggregators?
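Purely as a thought experiment, such a disclosure sheet might read something like the Python sketch below; every field and value is invented for illustration, not drawn from any real company.

```python
# A hypothetical "ethics and data-practices spec sheet" that might ship
# alongside the usual hardware specs. All fields and values are invented.
ethics_spec = {
    "data_collected": ["contacts", "location history", "browsing activity"],
    "retention_period_days": 730,
    "third_party_aggregators": 14,  # number of data-sharing partners
    "security_protocols": ["TLS 1.3 in transit", "AES-256 at rest"],
    "internal_access_controls": "role-based, audited quarterly",
    "harms_assessed_before_release": ["addictive design", "targeted manipulation"],
}

for spec_field, value in ethics_spec.items():
    print(f"{spec_field}: {value}")
```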

It’s one thing to fret over all the ways a social media company might use or misuse our valuable data, but it’s another thing altogether when social media companies have shortcomings protecting themselves!

Case in point: the infamous Twitter celebrity hack of last summer, in which hackers penetrated Twitter’s network and gained access to internal tools that let them take over any Twitter user’s account, including those of top cybersecurity firms and well-known celebrities.

Breaking the security protections of a $37 billion technology company apparently took no high-tech, sophisticated knowledge or expertise, which makes the breach even more concerning.

“Social media platforms have quickly become the leading source of news and information, yet no regulator has adequate oversight of their cybersecurity. The fact that Twitter was vulnerable to an unsophisticated attack shows that self-regulation is not the answer,” said Linda A. Lacewell, the superintendent of New York State’s Department of Financial Services, in a report summarizing the findings of an investigation into the Twitter hack.

Shouldn’t tech specs detail how a product or service provider is meeting its ethical obligations to the consumer, to society, and to the planet?

Shouldn’t we all be screaming “Yes!” about now?

Some folks already are. Tech industry initiatives like Tech for Good, the Center for Humane Technology, Truth About Tech, and others are tackling the perceived inequities, negative impacts, and outright threats posed by technology today. Among the activities conducted by these organizations, CHT holds regular online forums and conversations on the topics raised in the new documentary.

As CHT sees it, solving the problems addressed in The Social Dilemma and on other platforms will take a movement for a new world of humane technology, driven by pressure from millions of consumers worldwide. It will take a radical shift in how we understand, build, invest in, and assess technology, along with new rules, regulations, and enforceable laws to rein in runaway social media platforms.

The real dilemma is how consumers will respond to the opportunities presented.

 

-30-

 

 

