Broader Implications of the Mis/Disinformation Epidemic: Methods to Circumvent Censorship

Sheer Karny
12 min read · Nov 10, 2020
[Image credit: treehugger.com]

Introduction

Following World War II, the United States engaged in a propaganda war against the Soviet Union, aiming to establish democratic institutions in nations not yet claimed by Soviet communism. This conflict is known as the Cold War, and its propaganda was also aimed domestically at US citizens. During this period, the US limited the free exchange of ideas by silencing domestic communists in order to eradicate communist ideologies from the country. The US continued to promote positive US information until the end of the Cold War in 1991.

Fast forward to the 2016 election, when the United States encountered Russian entities attempting to undermine its democratic process. The Kremlin, the alleged perpetrator, used and continues to use a combination of internet trolls (people tasked with creating disinformation) and internet bots (automated accounts posing as human beings that repost the trolls' fake news). Consequently, social media platforms are flooded with manipulative headlines claiming to be breaking news, a tactic that allows disinformation to trend on social media sites. As a result, false news and political conspiracies are introduced to the public as legitimate media. One goal of the Russian disinformation campaign was to amplify radical political ideologies in order to polarize the political environment. In response to the increasingly polarized political climate, US government institutions and social networking companies have adopted censorship codes that target disinformation.

This essay aims to educate college students on the implications of persistent disinformation, the effects that censorship laws have on freedom of speech, and alternatives to censorship that can mitigate the harmful effects of fake news. Although social media platforms and governments attempt to mitigate harmful mis/disinformation to preserve democracy, history shows that broad-scoped regulation of harmful speech often curtails the public's freedom of speech. Current legislation to counteract mis/disinformation follows this narrative of broad-scoped limitations on speech. With this in mind, it is important to consider a similar case on college campuses, where administrators enact broad anti-hate-speech codes meant to protect minority students from bullying and harassment but instead dilute the free exchange of ideas. Governments and media companies may use their leverage to decide what is true and censor viewpoints that do not fall in line with their administrations' ideologies or sense of truth. Thus, the only way to circumvent potentially dystopian uses of censorship while addressing the harmful effects of misinformation is with more information and better education.

Current Laws Countering Disinformation

History indicates that attempts to alleviate social unrest by means of censorship result in breaches of freedom of speech. In late 2016, President Obama signed the "Countering Foreign Propaganda and Disinformation Act" (CFPDA), which created a government organization, the Global Engagement Center, to counter foreign propaganda (Committee on Foreign Affairs). In a critical analysis of this legislation, "Does the 'Countering Foreign Propaganda and Disinformation Act' Apply to American Independent or Alternative Media," the author, Lambert Strether, argues that overly broad terms in the CFPDA could grant the government power to censor domestic media. Strether highlights that this agency's purpose is to "Counter foreign and non-state propaganda and disinformation efforts aimed at undermining United States national security interest." The intention of the legislation is to respond to Russian interference in the 2016 election, and the United States government has an obvious obligation to protect the democratic process from foreign influence. Unfortunately, the US government's actions to counteract Kremlin-funded disinformation efforts may produce an agency with overly broad powers to censor.

As seen in the section "Drafting Ambiguities in the Purpose of the Global Engagement Center," the Act drafts multiple broad definitions of "non-state actor." Strether argues that "the meaning of 'non-state actor' is elastic." He also cites multiple precedent-setting laws, one of which defines a "non-state actor" as "any entity outside of government." Moreover, legislation drafted by the Department of State (DOS) in 2017 offers evidence of the overly broad powers this new agency may obtain. In this measure, the DOS calls for the president to "review and identify any non-state actors" and in turn "submit a report to the appropriate congressional committees detailing the reasons for such designation" (Strether). Strether responds that "the Act does no such thing" [the Act includes no equivalent review-and-report process] and that one "can certainly build a case that the Act applies domestically to 'alternative media'" like the journal he writes for.

One distinction made in "Nonstate Actors: Impact on International Relations and Implications for the United States" by the National Intelligence Council is that "The impact of non-state actors is context-dependent" (4). With this addition to Strether's argument, it becomes clear that the ambiguity of "non-state actor" may allow politicians and other government figures to conveniently silence controversial testimonies (like Edward Snowden's account of the NSA's breach of privacy) that make the political bubble more transparent. If this kind of censorship becomes commonplace, the greater American public will likely experience a breach of the First Amendment. Therefore, the Act's intent to limit disinformation follows the broad-scoped and easily manipulated narrative found in almost all speech-limiting codes. Furthermore, historical censorship controversies offer a clear perspective on what can happen if the US were to enact broad-scoped limitations on mis/disinformation.

Unsuccessful History of Speech-Limiting Codes

A glaring parallel to the disinformation epidemic can be seen in American universities' battle against hate speech. Here, university administrators attempt to remove hate speech from their campuses by censoring perceived hateful ideas. As the book Free Speech on Campus shows, universities that grant administrators authority to censor harmful and offensive language often breach students' and faculty members' freedom of expression. Again, censorship on the basis of overly broad definitions (at universities, of hate speech) limits the free exchange of ideas. According to the book's authors, Chemerinsky and Gillman, there is "no middle ground" between what is and is not an expression of free speech because "History demonstrates that there is no way to define an unacceptable, punishment-worthy idea without putting genuinely important new thinking and societal critique at risk" (63). Student and faculty voices are silenced when universities attempt to determine which speech is hateful and which is acceptable. This parallel strengthens the notion that governments may leverage newfound censorship powers to determine what is and is not true in order to guide speech toward the government's narrative. So, to avoid any 1984-esque use of censorship, the public should not entrust the government with censorship powers. However, the government is not nearly as influential on public discourse as the companies that host social networking sites. Society faces private entities whose standards of free speech are not bound by public legislation, who lack transparency about the information they hold on their users, and who essentially control the public's access to information.

Can Social Media Companies Be Responsible Enough to Censor?

With no transparency or obligation to the public, social networking sites like Facebook and YouTube will abuse speech codes to limit and direct free speech on the internet. As with governments, it is unlikely that social media companies will adopt precise codes to censor disinformation, and it is unclear what they plan to do to counteract it. A potential clue to their approach can be found in these companies' anti-hate-speech codes. In the article "A Brief History of YouTube Censorship," Jillian York, writing for the tech-news outlet Motherboard, describes the challenges that YouTube, and social media companies generally, face when trying to uphold both free speech and a business. YouTube has received backlash for abusing hate speech codes to limit conservative and controversial voices on its platform, all in order to retain advertisers. York mentions that "following the deadly school shooting in Parkland, Florida, YouTube has banned firearm demo videos and content promoting gun sales and has taken steps to remove conspiracy videos." Here, YouTube adopts a stance on controversial issues and uses its influence over the public stream of information to limit controversial voices. YouTube's actions are an attempt to appease advertisers who would otherwise pull their sponsorship if their products or services were displayed alongside controversial speakers and ideas. This is eerily similar to universities that censor in deference to the students who fund them with tuition fees. Both companies and universities are heavily inclined to appeal to those who fund their operations; however, their obligation to free speech principles should supersede any financial motivation. Clearly, conflicts of interest arise when an entity is both a corporation and "an arbiter of speech, not merely a technology company" (York). Social media companies should be responsible for maintaining platforms for free public discourse, yet YouTube's actions against hate speech amount not to protecting free speech but to an authoritarian vetting of ideas. Consequently, YouTube's and other social media companies' approaches to censoring disinformation may not fully uphold free speech principles.

This concern is also voiced in the article "How Europe Fights Fake News" by Prof. Anya Schiffrin of Columbia University. Schiffrin notes that when social media companies censor, the result is "privatized law enforcement and speech restriction… Even when intentions are good, the risk is that lawful speech will be restricted without judicial process." Leaders of these institutions do not need to be inherently bad actors for the principle of free speech to be lost in attempts to limit false news. Therefore, people should remain skeptical of any large-scale censorship of mis/disinformation that media and data superpowers like YouTube, Google, Facebook, and Amazon may enact. For this reason, social media companies should not remove mis/disinformation from the internet, and should instead employ different strategies to lessen its effects.

Alternatives to Censorship

Alternative methods to censorship are needed to counteract the harmful effects of disinformation, since the public cannot entrust governments and media companies with censorship power. Specifically, technologies that provide users with more information and better education are needed for a healthier information environment. In Janna Anderson and Lee Rainie's article, "The Future of Truth and Misinformation Online," experts offer their takes on the disinformation crisis. One of them is Howard Rheingold, a leading expert in and professor of communication technologies. He predicts that "some combination of education, algorithmic and social systems can help improve the signal-to-noise ratio online" (Anderson & Rainie). According to him and other experts, technologies will provide important context that vastly improves a person's ability to assess the legitimacy of news. Moreover, proposed add-ons to social networking sites would function to "reward logic" (Anderson & Rainie). An information society must identify truth through logical assessment in order to curb ignorance and false narratives. Unfortunately, sensationalist headlines are engineered to grab a viewer's interest instantly and to short-circuit logical thinking; their sensational language gives people no chance to review a headline skeptically. Future technologies can mitigate this influence by supplying context that counteracts users' instinctive curiosity and makes logical assessment easier and more commonplace. Thus, people will more often, and more reliably, separate factuality from sensationalism.
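
To make the idea of an add-on that "rewards logic" concrete, here is a minimal Python sketch of what such a tool could look like. The cue words, weights, and threshold below are all illustrative assumptions of mine, not a description of any system Anderson and Rainie's experts actually built:

```python
# A toy "logic-rewarding" add-on: score a headline's sensationalism and
# attach context before the user clicks. The cue words, weights, and the
# threshold below are illustrative assumptions, not a validated model.
SENSATIONAL_CUES = {"shocking", "you won't believe", "destroys", "exposed",
                    "miracle", "secret"}

def sensationalism_score(headline: str) -> float:
    """Return a rough 0-1 score; higher suggests more sensational language."""
    text = headline.lower()
    cue_hits = sum(cue in text for cue in SENSATIONAL_CUES)
    exclamations = headline.count("!")
    shouted_words = sum(w.isupper() and len(w) > 2 for w in headline.split())
    return min(0.3 * cue_hits + 0.2 * exclamations + 0.15 * shouted_words, 1.0)

def annotate(headline: str, threshold: float = 0.3) -> str:
    """Attach the kind of context a browser add-on might display inline."""
    score = sensationalism_score(headline)
    advice = "pause and verify" if score >= threshold else "no obvious red flags"
    return f"{headline}  [sensationalism={score:.2f}: {advice}]"

print(annotate("SHOCKING: You won't believe what Congress did!"))
print(annotate("Senate passes annual budget resolution"))
```

Even this toy example exposes the design tension: the signals are cheap to compute but easy to game, which is why the experts pair algorithmic systems with education rather than relying on either alone.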

Unsurprisingly, money is the incentive for fake news publishers to flood the internet with false narratives. Advertising algorithms on social media networks allow these bad actors to target specific demographics and simultaneously make money from ads they run on their own sites. Fake news publishers use the data that social media companies hold on their users (political leanings, social demographics, and other crucial information) to feed users group-specific inflammatory headlines. With these tactics, publishers garner many views and make money from the ads on their respective websites. Consequently, fabricated stories become treated as official reports and topics of political discussion. As Anderson and Rainie mention, "removing their [bad actors'] incentive could stop much of the news postings." Now consider how future technologies could affect these practices. One proposed system is a "trust index," a logic-rewarding tool that helps verify "how much the source is perceived as trustworthy" (Anderson & Rainie). Articles would receive factuality scores from this index, displayed to the user, allowing users to make an informed decision about whether to view a given piece of news based on its trustworthiness. Once people come across articles with low confidence scores, they will likely choose not to view them, much as people tend not to enter websites or download files that security software flags as unsafe. Publishers would thus receive fewer views for fake news, and less money would be made from targeted fake news posts, leaving fake news with a diminished presence in social networks and less influence on public discourse.
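
As a sketch of how such a "trust index" might blend signals into a single displayed score, consider the following Python illustration. The reputation table, the weights, and the warning threshold are invented for this example; Anderson and Rainie's sources do not specify an implementation:

```python
from dataclasses import dataclass

# Illustrative reputation table; a real index would draw on fact-checker
# ratings, correction history, ownership transparency, and so on.
SOURCE_REPUTATION = {"established-wire.example": 0.9,
                     "unknown-blog.example": 0.4,
                     "known-fabricator.example": 0.1}

@dataclass
class Article:
    source: str            # domain the article came from
    corroborations: int    # independent outlets reporting the same claim
    bot_share: float       # estimated fraction of sharers that are bots

def trust_index(article: Article) -> float:
    """Blend three signals into a 0-1 score (weights are assumptions)."""
    reputation = SOURCE_REPUTATION.get(article.source, 0.5)  # unknown -> 0.5
    corroboration = min(article.corroborations / 5, 1.0)     # saturates at 5
    human_share = 1.0 - article.bot_share
    return 0.5 * reputation + 0.3 * corroboration + 0.2 * human_share

def display_label(score: float) -> str:
    """Mimic the security-software-style warning the essay describes."""
    return "LOW CONFIDENCE - read with caution" if score < 0.5 else "OK"

story = Article("known-fabricator.example", corroborations=0, bot_share=0.8)
score = trust_index(story)
print(f"trust index = {score:.2f} -> {display_label(score)}")
```

Note that the score itself censors nothing; like a browser security warning, it only adds information at the point of decision, which is exactly what distinguishes this approach from removal.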

Counter Argument

Although these prospective technologies could revolutionize media interactions, there is no guarantee that they will be accurate analyzers of truth, since current technologies are vastly limited. In Anderson and Rainie's article, Jason Hong, a computer science professor at Carnegie Mellon, highlights that "The problem is that it's still very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There's also the issue of subtle nuances or differences of opinion or interpretation." What Hong refers to is the field of natural language processing (NLP). NLP remains rudimentary relative to what a program would need to accurately discern factuality, and reliable text-processing systems are required to provide sound information to users about any news source. Another cause for concern is found in the study "Growing Bot Security: An Ecological View of Bot Agency," in which Douglas Guilbeault assesses the current state of bot detection technologies. Guilbeault notes that "a recent bot detection competition showed how the leading detection algorithms are significantly limited, with each requiring error prone human supervision at several stages of analysis" (5005). One way to discern a media source's truthfulness is to identify its origin and who shares it, so it is important to be able to identify the bots that spread mis/disinformation. Text-analyzing systems therefore require trustworthy and efficient bot detection in order to provide accurate assessments of media sources. Given the current state of technology, deploying these programs without proven efficacy may produce unintentional censorship, and the timeline for when such systems will reach the market is unknown. So, the public must entertain a future where mis/disinformation goes unchecked.
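
To see why bot detection is "error prone," consider this minimal Python sketch of the feature-based heuristics many detectors rely on. The features and thresholds are my assumptions for illustration, not Guilbeault's; notice how easily a prolific human account trips the same rules:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float     # average posting rate
    followers: int
    following: int
    has_default_avatar: bool

def bot_likelihood(acct: Account) -> float:
    """Naive 0-1 heuristic; every rule below admits human false positives."""
    score = 0.0
    if acct.posts_per_day > 50:          # bots often post at inhuman rates...
        score += 0.4                     # ...but so do busy news aggregators
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.3                     # mass-following is a weak signal
    if acct.has_default_avatar:
        score += 0.2                     # many real users never set an avatar
    return min(score, 1.0)

suspect = Account(posts_per_day=120, followers=30, following=2000,
                  has_default_avatar=True)
journalist = Account(posts_per_day=60, followers=50_000, following=400,
                     has_default_avatar=False)
print(f"suspect:    {bot_likelihood(suspect):.2f}")     # high score
print(f"journalist: {bot_likelihood(journalist):.2f}")  # nonzero false-positive risk
```

Both accounts trip the posting-rate rule, which is precisely the kind of ambiguity that, per Guilbeault, still requires error-prone human supervision to resolve.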

Rebuttal

In preparation for unsuccessful technologies, governments and media companies must make a concerted effort to educate the public to discern factual information manually. According to the author of "How Europe Fights Fake News," some key players, like "Facebook and various foundations, want to fund more teaching of media literacy in schools so that news audiences will become more discerning consumers" (Schiffrin). It is paramount for students and the general public to be adequately skeptical of source legitimacy: society needs critical thinkers, not people who are overly reliant on technology to deduce truth. Media-literate individuals are "better able to understand the complex messages we receive from television, radio, Internet, newspapers, magazines, books, billboards, video games, music, and all other forms of media" (Media Literacy Foundation). With such individuals, social media users, companies, and governments will not need to bet on hypothetical technologies in the search for factuality. In reality, social media companies and the government will continue to invest in future technologies meant to discern truth, and fostering a media-literate society will better equip people to use those technologies if and when they materialize. Considering these outcomes, the most worthwhile investment authorities can make against mis/disinformation is educating the public in media literacy.

Conclusion

Foreign and domestic mis/disinformation campaigns increasingly polarize the political climate and make civil discourse more challenging than ever. The United States government and social media companies are pressured to enact censorship codes in order to lessen the influence that bad actors have over public discourse. A cause for concern is the similarity between speech codes against disinformation and anti-hate-speech codes on college campuses, which indicates that free speech and civil discourse are vulnerable to infringement if governments and social media companies fight disinformation with censorship. This path will likely shape social narratives and silence critics and controversial voices, so solutions must circumvent the temptation to censor. One possible solution lies in future technologies that mitigate the harmful effects of misinformation by providing users with more information about a text's factuality and source legitimacy; users would be able to make informed decisions and be less influenced by false narratives. The issue with these technologies is that they lack strong proof of concept, and it is naive to assume such advanced technology will exist in the near future. Authorities must therefore find an immediate response to the disinformation crisis: education in media literacy. This would allow people to discern truth amid the overwhelming amount of fake news, all without relying on technology. Fortunately, alternatives to censorship can potentially remedy the polarized state of civil discourse without risking free speech. This topic has arisen only recently, and many of its issues lack formal academic assessment. More research is needed to thoroughly understand the implications of disinformation for social interactions, the government agency tasked with limiting disinformation, social media companies' policies, and the technologies and other techniques being developed to counteract it.


Sheer Karny

I am a fourth-year Cognitive Science major at UC Berkeley.