The Social Dilemma is a Social Catastrophe

Judy Shapiro

Editor-in-Chief at The Trust Web Times

By: Craig Danuloff (@cdanuloff)

(Guest post by Craig Danuloff. Full bio below)

The Social Dilemma is a terrifying documentary about the toxic combination of social media and surveillance capitalism and how together they’re harming our lives and our society. 

It’s the story of how the tech giants discovered they could optimize addiction by leveraging techniques of psychological manipulation, in order to earn billions. It’s told by some of the very people who created and scaled these companies – leaders from Facebook, Google, Twitter, and others.

None of them set out with bad intentions; they were just trying to build popular products and make money along the way. But as the movie shows, things got out of control, and the unintended consequences have been devastating. In the words of these tech executives themselves, we are heading towards “the destruction of civilization, the end of democracy, and a checkmate on humanity.”

ADDICTION FOR PROFIT

The interviews with these tech execs, as well as social scientists, educators, and others, are set against a fictional story of a family whom we see living a screen-time-based life that most will find familiar. To them, and even to us watching, the importance they place on their online interactions, and their dependence on them, seems unfortunate, but certainly not tragic.

But the film is incredibly effective in taking us on a tour of the addictive tricks the social media companies use, and the destructive way surveillance capitalism pays for and justifies the resulting carnage.

None of this will be new for those who’ve been paying attention, but as big tech’s magic tricks are revealed, the filmmakers both zoom in and slow them down in a way that makes them much easier to understand and much more repulsive to contemplate.

We all know that these apps get us to log in and scroll by exploiting our psychology, and that the money is made by selling our eyeballs to advertisers. But the movie drills down effectively on how and why those practices are more advanced and insidious than most understand:

  • Technologist Jaron Lanier clarifies that advertisers aren’t paying to show you ads, but rather for “imperceptible changes in your behavior.” The targeting we hear about is only the precursor to the main event: getting you to think differently and take action.
  • Professor Shoshana Zuboff highlights that because these companies are getting paid for results, what they’re really selling is your future behavior. This gives them a stake in both predicting and manipulating that behavior, and the better they get at both the greater the harm they can inflict.
  • Investor Roger McNamee points out that we, as humans, are pitted against an algorithm, and that is not a fair fight. The platforms have massive computing power (which has increased a trillion-fold in just a few decades) and machine learning that turns our willpower into a blade of grass hoping to fend off a bulldozer.
  • Ex-Googler Tristan Harris laments that while we’ve been worried about when computers will exceed human strengths (and therefore potentially subjugate us), they have in some cases already overtaken human weaknesses, which gives them the power to harm us at will right now.

The aggregate case that’s made, quite convincingly, is that the biggest of these firms have effectively built voodoo dolls of us, based on the information we’ve given them or allowed them to collect. They run predictive algorithms on our data to tell them what we are likely to do next, and find groups of people who are similar to us (cohorts) that help them sharpen these predictions.

These companies know how we’re likely to react to various stimuli, so they increasingly manipulate us via our feeds. If they don’t know, they run small tests and find out what we’re likely to do, then execute at scale.
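To make that “small tests, then execute at scale” mechanic concrete, here is a minimal sketch of an epsilon-greedy test-and-exploit loop, a standard technique for this kind of optimization. It is purely illustrative and assumes nothing about any real platform’s code; the class and variant names are hypothetical.

```python
import random

class EngagementBandit:
    # Hypothetical illustration of epsilon-greedy testing;
    # not any real platform's code.
    def __init__(self, variants, epsilon=0.1):
        self.epsilon = epsilon                   # share of traffic spent on "small tests"
        self.shows = {v: 0 for v in variants}    # how often each variant was shown
        self.engaged = {v: 0 for v in variants}  # how often it produced engagement

    def choose(self):
        # Small test: occasionally try a random variant on a slice of users...
        if random.random() < self.epsilon:
            return random.choice(list(self.shows))
        # ...otherwise execute at scale with the best performer so far.
        return max(self.shows, key=lambda v: self.engaged[v] / (self.shows[v] or 1))

    def record(self, variant, engaged):
        self.shows[variant] += 1
        self.engaged[variant] += int(engaged)

bandit = EngagementBandit(["outrage_headline", "neutral_headline", "cute_animal_photo"])
variant = bandit.choose()              # pick what this user sees
bandit.record(variant, engaged=True)   # feed the outcome back into the loop
```

Run across billions of impressions, even a loop this simple will drift toward whatever keeps people reacting.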

They know how to get us to visit, then how to get us to stay, then how to get us to act, and even how to make us feel – with subtlety and increasing precision. We’re largely unaware we’re being manipulated and are generally without defenses; often we don’t know it’s happened even after the fact.

But the problem isn’t that they want more of us to use their products more often – that’s only an intermediate goal.

The problem is that they have a unique set of tools (human relationships, cognitive models, powerful algorithms, endless content, etc.) to do that incredibly effectively – it could fairly be considered asymmetric psychological warfare – and they’re optimizing for their own profit by getting us to take actions, regardless of the impact those actions have on us.

Their role (as practiced today) is to find the products or ideas we’re best suited for and to sell our attention to whoever is offering those products. Or worse, to shape us so that our minds will be receptive to new products or ideas for which they have buyers willing to pay. And they do so with absolute disregard for our well-being. In the movie, once they get a teenager hooked into a political cult, they sell his attention to a gun manufacturer – but he would never have been interested in guns had they not first introduced him to that brand of politics.

WHO IS GETTING HURT?

The impact all of this is having on us is bad for each of us individually, and worse for us as a society.

A movie-script word cloud would reveal that addiction, alienation, assault (on democracy), cyber-attacks, destroy, dysmorphia, dystopia, erosion (of social fabric), existential threat, fake news, harassment, loneliness, manipulation, outrage, polarization, radicalization, self-harm, surveillance capitalism, and survival all figure prominently. This is how the people who created these platforms, the people who study them, and those who clean up after them see the impacts.

The fact that we’re allowing this to be done to children is particularly chilling; the stats on teen and pre-teen self-harm (up 62% and 189% respectively) and suicide (up 70% and 151%) alone should be enough to earn social media (if not all web use) the same age-restricted limits as driving – if not voting, drinking, and joining the armed forces – in a world with any common sense at all.

For adults and society as a whole the most severe and widespread damage occurs when political issues are run through the machine. If all this were just selling books and vacations, we might be wasting some time and money, but most of the depressing words used above likely wouldn’t come into play.

But when addiction and manipulation are used to sell ideas, the stakes rise and the game gets rougher. It’s no longer just about using a little personalization to optimize sales; it becomes a path that starts with personalization and then gets pulled toward radicalization, then falsification, then antagonism. It works like this:

The platforms have a natural desire to maximize your use (as does any business), so they personalize what you see. This is where they leverage what they know about you, and about people similar to you, to give you a feed full of things you like and agree with; over time, they hide and avoid anything they think you may dislike or disagree with. It’s called a filter bubble, and we all live in one; our news, search results, entertainment choices, ads, offers, and the opinions we see on most of the sites we use are all ‘personalized’ in this way.

The good part of a filter bubble is that it’s safe and feels comfortable. It reassures us that we’ve made good choices because it seems like ‘the world’ agrees with our tastes and our opinions. We like it so we come back often. We engage with the content because it aligns with our existing goals and models. It’s very hard to keep in mind that we’re seeing a view optimized for an audience of one.

The bad part is that life in a filter bubble inevitably strengthens the thoughts or beliefs you already had, often supercharging them, because all you see are variations and extensions. You may even get ‘pulled down rabbit holes,’ where content that extends your ideas a little further, and then a little further still, is presented to you. This kind of increasingly extreme but compelling content tends to be very engaging and can quickly lead to some level of radicalization.
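For readers who think in code, here is a minimal toy model of that feedback loop, assuming nothing about any real platform’s internals; the scoring rule, tags, and item titles are all invented for illustration.

```python
from collections import Counter

def rank_feed(items, profile):
    # Score each item by how strongly its tags match what the user
    # has already engaged with; lead the feed with the best match.
    return sorted(items, key=lambda item: sum(profile[t] for t in item["tags"]), reverse=True)

def engage(profile, item):
    # Engagement reinforces the very tags that surfaced the item.
    for tag in item["tags"]:
        profile[tag] += 1

profile = Counter({"politics": 3, "sports": 1})
items = [
    {"title": "Hot take",      "tags": ["politics", "outrage"]},
    {"title": "Moderate take", "tags": ["politics"]},
    {"title": "Match recap",   "tags": ["sports"]},
]

for session in range(3):
    top = rank_feed(items, profile)[0]  # the feed leads with the best match
    engage(profile, top)                # and each click narrows the profile

print(profile)  # "politics" and "outrage" now dominate; "sports" barely registers
```

Even this crude version shows the dynamic: nothing in the code ‘wants’ to radicalize anyone, yet the highest-intensity item wins every session and its tags compound.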

It’s worth noting that the platforms don’t actually intend to lead you in any particular direction; they’re just taking advantage of the reality that more intense content is generally more effective at keeping your attention and earning them more time and more clicks. YouTube is the undisputed king of this technique, with well-documented, horrific ramifications.

Particularly when political interests are involved, falsified content (whether deliberate or not) often creeps into the filter bubbles or defines the rabbit holes. The platforms rarely prevent this because they don’t really take responsibility for verifying or confirming anything. As we’ve seen before, false content, when constantly reinforced in a narrow and intensified experience, tends to breed extremism. And this often leads to antagonism towards both contrary ideas and those who hold them.

Apply this formula to enough issues on a broad enough scale, and you arrive in a world that lacks the kind of shared reality necessary for a society to grapple with issues, make decisions, move forward, and remain cohesive. Sound familiar? 

WHERE IS THE SOLUTION?

The fact that these companies, technologies, techniques, and impacts were all created from scratch, and reshaped our world and most of the people in it, in just twenty or so years is astounding. But because it happened so recently, we get the chance to listen to the very people who created and managed it all, and to see and hear the genuine surprise on their faces and in their voices that this is how it turned out.

Nobody intended to build reaction loops that would lead teenagers to harm themselves. Nobody conceived of the possibility of ideological marketers co-opting the system and filling it with false content. Nobody knew that building optimization routines to make a few more bucks would become this powerful or have these broad societal impacts. They are genuinely shocked and appalled at what has resulted.

At least that’s true of those who chose to participate in this film. And undoubtedly of many but not all others. That doesn’t exempt them from responsibility, and clearly those in the film feel the weight of that. Did they reach these conclusions soon enough? Where are the morals of the people still driving these companies full speed towards the cliff? Many questions remain.

Where the film disappoints is in offering solutions. What changes are needed now? How can they be made? Who should or will make them? These questions and others are left almost entirely unanswered by both the key players on screen and the filmmakers. They spend an hour and forty minutes making a devastatingly thorough and complex case for a massive societal problem and then wrap it up with a few mumbled calls for ambiguous ‘government regulation’ and an exasperated, almost pleading realization that ‘we have to do something.’

It seems like a colossal wasted opportunity.

Solutions obviously aren’t easy, but there are at least three options worth pursuing. Each can have a meaningful impact, and taken together they could have a substantial one. None is simple, and each involves different costs, trajectories, and responsible parties – so one would hope work would begin on all of them.

The first is that we need to stop willingly feeding the beast.

The film makes clear that the platforms and their algorithms run on the data we voluntarily share with them. But most people share vastly more data, far more freely, than they need to. By changing settings, learning to make better choices, and using a few simple tools or utilities, each of us can massively reduce what these firms know about us, and thereby weaken their ability to use our own data against us. Helping people make this change is the mission of The Privacy Co. and our Priiv app.

The second is the clear need for governmental regulation (or self-imposed industry policies).

Both GDPR and CCPA are minimal first steps, but they put government in the game in a material way and put industry on notice: companies need to re-tool for compliance, and they had better start thinking about data minimization, better disclosures, and the broader implications of their actions, or they might not like the next regulatory round. Of course, the idea of governmental regulation solving anything is fraught with risk, uncertainty, and potential unintended consequences.

The third option is to fix the system itself rather than merely restrain it. While most of the executives in the film do suggest that government regulation is needed, they also point out that the business models behind these companies are the ultimate problem.

Let’s be clear: these firms earn their revenue by harming their customers and damaging the society in which those customers live. Advertising-driven businesses, at least in this modern world, are naturally driven to promote addiction and leverage manipulation.

So why not change business models? To the executives in the film, apparently, that’s unimaginable. 

But why? None of these businesses has to use advertising to drive revenue. They each provide services that consumers obviously value. Facebook earned $112 per user (US/Canada) in 2019, while Google’s advertising business earned $256 per user that year. Twitter earned about $25 and Pinterest about $15 for the year. Netflix, for comparison, sits in between at $131.

Which means, roughly, that for the price of two Netflix subscriptions users could get Facebook, Twitter, Instagram, Snapchat, and probably one or two more. For another two Netflix subscriptions, users could get Google Search, Gmail, Google Maps, and YouTube. All that technology and talent could work on user delight instead of user abuse, all while the companies remain rapidly growing and hugely profitable.
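As a sanity check, the arithmetic works with the article’s own numbers:

```python
# Back-of-the-envelope check using the 2019 per-user annual revenue
# figures quoted above (Instagram and Snapchat figures aren't given,
# so they would have to fit within the leftover headroom).
two_netflix = 2 * 131           # $262: the price of two Netflix subscriptions
quoted_bundle = 112 + 25 + 15   # Facebook + Twitter + Pinterest = $152
print(two_netflix - quoted_bundle)  # $110 of headroom for the rest of the list
print(two_netflix >= 256)           # True: another two subscriptions covers Google's $256
```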

Of course, this switch isn’t simple. Users are conditioned to get these things for free. More affluent users generate more revenue and so it doesn’t work if only they switch to a paid model. But the problems are worth solving when you consider the result: the platforms could then focus on serving customers rather than serving them to advertisers.

The Social Dilemma is an understated title. What the film lays out is The Social Catastrophe. It’s impossible to spend two hours watching this presentation and not find yourself awed by the scope of the problem and petrified for our collective future.

It’s been almost ten years since former Facebook employee Jeff Hammerbacher said: “The best minds of my generation are thinking about how to make people click ads. That sucks.” It remains so.

The people who built these companies, and many more like them, often say they want to solve hard problems. They also often claim to want important work with meaning. 

Well, here you go. Save us and save yourself. Move fast and break these things.

_______________________________________________________________________

Craig Danuloff is an entrepreneur and executive with deep experience defining and managing technology businesses. He has founded and led six venture funded companies, and held senior leadership positions at five others.

Today, Craig is CEO and co-founder of The Privacy Co, creator of the Priiv app, a personal privacy management solution that helps consumers take back control of their data and protect themselves from the enormous problem of privacy theft. Priiv is currently available in the app store or at clck.it/priiv.

Craig is a graduate of the University of Colorado College of Business & Administration and the author of over two dozen books on software and technology.

____________________________________________________________________

Original article first published here: https://theprivacy.com/2020/09/14/the-social-dilemma-is-a-social-catastrophe/
