Let’s Talk About The Other Social Dilemmas
Blog Post

Oct. 29, 2020
In September, Netflix released the docudrama The Social Dilemma, which explores the role social media plays in today's world and some of the harms it creates. As presented in the film, these include fostering addiction to technology and harming users' mental health, as well as deepening ideological polarization and political divides. The film further emphasizes that many of these harms are driven by internet platforms' business models, which rely on the vast collection and exploitation of user data to fuel algorithmic systems and generate revenue primarily through targeted advertising.
The Social Dilemma raises important points, and its popularity helps widen a much-needed conversation about these issues. However, the film's critique of social media misses key elements that are integral to understanding the profound influence platforms' algorithmic systems hold over society. Without this understanding, the audience is left without the tools to discuss how we can truly hold platforms accountable for their harms.
As the film outlines, most internet platforms' business models are part of the surveillance capitalism economy, in which companies commodify personal user data to generate profit, primarily through algorithm-driven targeted advertising. The term "surveillance capitalism" was coined by social psychologist, author, and Harvard professor Shoshana Zuboff, and numerous subsequent analyses have confirmed and emphasized the role that internet platforms' business models play in facilitating and amplifying the harms caused by social media.
Many of the algorithmic systems that platforms deploy are designed to optimize for a range of metrics and signals, including "watch time" and "engagement." By emphasizing these signals, platforms aim to serve users engaging and "relevant" content so that they can retain user attention and deliver more digital ads, their primary source of income. As the film outlines, these engagement-driven systems have come to foster addiction among users. They have also resulted in the recommendation and amplification of harmful content such as conspiracy theories and misinformation, which often garners greater user attention and is therefore treated as more "engaging."
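To make this mechanism concrete, here is a minimal illustrative sketch of how a feed-ranking system might score content by predicted engagement. This is not any platform's actual code; the function names, signals, and weights are all hypothetical, chosen only to show why content that maximizes attention rises to the top regardless of its social value.

```python
# Illustrative sketch only: a hypothetical engagement-based ranker.
# Real platform systems are far more complex; all names and weights
# here are invented for explanation.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_watch_time: float   # model's estimate, in seconds
    predicted_click_prob: float   # estimated probability of engagement (0-1)

def engagement_score(item: Item, w_watch: float = 0.7, w_click: float = 0.3) -> float:
    """Combine engagement signals into a single ranking score.

    Because content that provokes strong reactions (including conspiracy
    theories and misinformation) often scores high on exactly these
    signals, optimizing this score can amplify it -- the dynamic the
    film and this post describe.
    """
    return w_watch * item.predicted_watch_time + w_click * item.predicted_click_prob

def rank_feed(items: list[Item]) -> list[Item]:
    # Sort candidate items so the most "engaging" appear first.
    return sorted(items, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = rank_feed([
        Item("calm-news", predicted_watch_time=20.0, predicted_click_prob=0.05),
        Item("outrage-clip", predicted_watch_time=95.0, predicted_click_prob=0.30),
    ])
    print([i.item_id for i in feed])  # the high-engagement item ranks first
```

The sketch's point is simply that whatever maximizes the score gets surfaced first; nothing in the objective accounts for accuracy, user wellbeing, or downstream harm.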
However, the film does not discuss the disproportionate harms these algorithmic systems have caused, and continue to cause, to communities of color and other marginalized groups. Over the past several years, researchers have demonstrated how algorithmic systems employed by major internet platforms can silence the speech of users of color and produce discriminatory outcomes when making decisions, including in life-altering situations. Relatedly, in March 2019, Facebook agreed to pay approximately $5 million to settle numerous lawsuits claiming that the company's advertising platform enabled discrimination in housing, employment, and credit ads. The settlement came after landmark lawsuits, such as the one filed by the National Fair Housing Alliance (NFHA) and three of its member organizations, which claimed that Facebook's advertising platform permitted housing providers to engage in illegal housing discrimination by allowing advertisers to control which audiences saw housing-related ads based on characteristics such as race, religion, sex, and disability. The Department of Housing and Urban Development (HUD) also charged Facebook with violating the Fair Housing Act.
Scholars and activists such as Safiya Noble, Virginia Eubanks, and Joy Buolamwini have written extensively about how technology, particularly algorithmic tools, can be used to fuel existing social inequities and reinforce these harms in the digital space. These diverse perspectives are integral to painting a complete and accurate picture of the harms that these platforms and their systems can cause and have already caused, and to finding a path toward meaningful algorithmic and platform accountability. Any conversation with these goals should integrate subject matter experts from a range of disciplines, including computer science, race and gender studies, political science, civil rights, and public policy.
It is also important to strike a careful but necessary balance when discussing the societal impacts of social media platforms. There is no doubt that these platforms have contributed significantly to political polarization and to the distribution and amplification of misinformation and disinformation, and that they pose numerous threats to human and digital rights. These impacts have not gone unnoticed: according to a recent Pew Research Center study, approximately 64% of Americans believe that social media is having a mostly negative effect on the state of affairs in the United States today.
However, across the globe, social media platforms have also helped to connect billions of people, democratize information flows, enable community building, empower grassroots advocacy and citizen movements, and facilitate social change, often benefiting marginalized and vulnerable communities. As we think through how platforms must adapt their operations to become more rights-respecting and to address the harms their services cause, we must also ensure that these transformations do not end up harming the communities that benefit from these technologies. As experts have noted, this is the true social dilemma, and it must be top of mind as we forge ahead on efforts to hold platforms accountable for their technologies.
Finally, how can we hold platforms accountable for their use of flawed algorithmic systems and for their ever-expanding role as gatekeepers of online speech? This is a conversation happening across the globe, from the European Union to India. In the United States, the First Amendment of the U.S. Constitution imposes unique constraints on the extent to which U.S. policymakers can regulate how companies decide which content to permit and amplify on their platforms, making this a complicated issue. However, policymakers can and should begin by clarifying that all offline anti-discrimination statutes, including the Civil Rights Act of 1964, the Fair Housing Act, and the Voting Rights Act, apply in the digital environment, and, where necessary, move to fill gaps or clarify the applicability of such laws through appropriate legislation.
In addition, policymakers should enact rules requiring greater transparency from online platforms regarding their use of algorithmic systems for processes such as content moderation, content ranking, ad targeting and delivery, and recommendation. This is an area in which OTI, along with numerous other civil society and academic partners, has been pressing companies to improve. Over the past several years, we have seen some progress, with platforms providing greater transparency around their use of automated tools for content moderation, for example. However, there is still a long way to go before these transparency and accountability efforts are meaningful and granular enough.
The Social Dilemma is a good first step toward engaging society in a discussion of how internet platforms use algorithmic systems to shape our online and offline experiences. Ultimately, however, if we want to properly tackle the very real and consequential issues that social media platforms have introduced into society, we must name all of them head-on.