Twitter’s new CEO isn’t concerned about free speech and plans to restrict content sharing, Facebook is clamoring for government regulation in the name of public safety, and there are increasing calls to break up Big Tech, but is any of this necessary when illegal activity is already illegal? Instead, our focus should be the First Amendment…
The announcement from Twitter’s new CEO, Parag Agrawal, that the social media company would begin “expanding” its private information policy to include media has prompted critics to claim censorship and to call for remedies ranging from new regulations to breaking up Big Tech entirely. As we’ve seen in the past, the blog post describing these new changes at Twitter seems innocuous enough on the surface, complete with a paean to the disadvantaged. The company believes “There are growing concerns about the misuse of media and information that is not available elsewhere online as a tool to harass, intimidate, and reveal the identities of individuals. Sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm. The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities.” To prevent this harm, Twitter proposes to “take action” when they “receive a report that a Tweet contains unauthorized private media.”
These actions include removing the media in question, and there is some carve out for public figures. “This policy is not applicable to media featuring public figures or individuals when media and accompanying Tweet text are shared in the public interest or add value to public discourse.” The carve out, however, has a carve out of its own: “if the purpose of the dissemination of private images of public figures or individuals who are part of public conversations is to harass, intimidate, or use fear to silence them, we may remove the content in line with our policy against abusive behavior.” The lack of transparency in this standard and the inherently subjective nature of terms like “harass,” “intimidate,” and “use fear,” applied to public figures who may themselves be engaged in the same tactics, have led many conservatives to believe the company will not apply it even-handedly. They would instead come down much harder on conservative speech, like when Twitter and Facebook both suppressed the Hunter Biden laptop story under completely false pretenses.
For its part, Facebook is currently in the odd position of literally begging for government regulation, spending millions of dollars on an ad campaign to build support for Congress to act. The Facebook website declares “We support updated regulations on the internet’s most pressing challenges,” noting the last time “comprehensive regulations were passed was 1996. We want updated internet regulations to set clear guidelines for addressing today’s toughest challenges.” Of course, they can’t do it alone: “That’s why we support regulations to set clear and fair rules for everyone, and support a safe and secure open internet where creativity and competition can thrive.” In Facebook’s opinion, these regulations should include “thoughtful” changes to Section 230, the liability shield that currently protects social media companies, unlike traditional publishers, from responsibility for content posted on their platforms.
These changes should make “content moderation systems more transparent,” and ensure “tech companies are held accountable for combating child exploitation, opioid abuse, and other types of illegal activity.” They also support regulations around “foreign election interference,” including standards for ad “transparency,” plus protecting privacy and data and enabling data portability. Why anyone would want the government, whose IT systems have been hacked numerous times and are about as porous as enterprise systems get, anywhere near their private data remains a mystery, but that is somewhat beside the point. Oddly, Facebook is supported in these efforts by a supposed whistleblower, former employee Frances Haugen, who claimed in October that “Facebook’s products harm children, stoke division and weaken our democracy,” and that “The company’s leadership knows how to make Facebook and Instagram safer but won’t make the necessary changes because they have put their astronomical profits before people.” Ms. Haugen agrees with Facebook itself, saying “Congressional action is needed. They won’t solve this crisis without your help,” apparently never having heard President Ronald Reagan’s old axiom about the nine most terrifying words in the English language.
What kind of help might that be in this case? What, precisely, would these new regulations look like, particularly around content moderation and advertising? Unfortunately, neither Facebook nor the “whistleblower” actually comes out and says, as if these details weren’t incredibly important. If discussions in Democratic circles and the actions of the Biden Administration are any indication, however, they are referring to some kind of complex government scheme to control and police content, essentially putting some new agency in charge. Earlier this year, Politico reported that the Biden Administration was working with allied groups, including the Democratic National Committee, to engage “fact-checkers more aggressively and work with SMS carriers to dispel misinformation about vaccines that is sent over social media and text messages.” Biden himself said the social media companies are “killing people,” and there were reports the administration was pushing for Facebook, Twitter, and others to collaborate on a banning process, where a person banned from one platform would be banned from all.
In other words, government bureaucrats, who progressives assume would be left-leaning, would determine what is acceptable speech and set the appropriate speech codes. The First Amendment, that is, your right to speak your mind freely without fear of censorship, need not apply. Ironically, Twitter’s new CEO, Mr. Agrawal, actually said as much last year in an interview with Technology Review. “Our role is not to be bound by the First Amendment, but our role is to serve a healthy public conversation and our moves are reflective of things that we believe lead to a healthier public conversation.” He continued, “The kinds of things that we do about this is, focus less on thinking about free speech, but thinking about how the times have changed.” Facebook’s CEO, Mark Zuckerberg, has not been as outspoken in public about the need to adapt free speech to changing times, but the actions of his company certainly don’t suggest a healthy respect for this fundamental right. This is the same Facebook that suppressed content from the third largest newspaper in America in the middle of an election, falsely claiming it was misinformation, and that has banned a former President of the United States indefinitely from the platform. Overall, it doesn’t appear that progressives in general have much interest in the First Amendment anymore, unlike at the height of the free speech movement in the 1960s. Today, it’s not uncommon to hear opinions like that of The View’s Joy Behar, who believes it needs to be “tweaked” in the modern era, along with the Second Amendment.
Lost in all of this discussion around regulation and changing the First Amendment is why any of this is truly necessary: Why can’t “regulating” social media be handled under existing law? What is so radically different that it requires some new government scheme instead of the hundreds of years of constitutional and case law built up around every type of speech imaginable? Putting this another way, illegal activity is already illegal. There are laws in place right now that protect against threatening people, defaming people, spreading false information, exploiting children, and more. If I threaten to kill somebody on Facebook, I’m likely committing an illegal act subject to prosecution. In addition, there is no shortage of existing means to hold companies accountable by making them liable for the decisions they make: Companies that violate criminal law can be prosecuted, and individuals can bring civil lawsuits if their rights are infringed. What more do you really need?
Of course, the social media companies are currently protected from lawsuits via Section 230, the original devil’s bargain. The exchange was simple at the time: Offer social media companies freedom from lawsuits if they guarantee a free and open platform for users. The idea was straightforward: Companies shouldn’t be liable for content they didn’t create and publish. A traditional newspaper exercises editorial discretion and has full control over what it produces; therefore, it is liable for its output. Social media was different: The content is created and posted by the users themselves, with the social media company acting as a platform. How could they be liable if I threatened or defamed someone? This worked for a while, but then the social media companies started acting more and more like publishers, moderating content, marking or suppressing it at their whim, and sometimes even lying about it.
It seems overwhelmingly obvious to me that our goal should be to return these platforms to their original purpose, as free and open forums where protection of the First Amendment is paramount. To the extent that new regulation is required, it should be exceedingly sparse and focus on transparency and accountability. Transparency would require social media companies to make the arbitration of content moderation disputes public at the request of the parties involved; accountability would make them liable for those decisions, particularly when it comes to violations of the First Amendment. The balance between the two would allow social media companies to regulate content that appears to be illegal, such as threats, defamation, revenge porn, child exploitation, and other activities in clear violation of existing law, removing it immediately without needing to rely on law enforcement.
If the company in question removes content that doesn’t fit these criteria, however, the user would have the option to make the dispute public, and the company would be legally liable for its decision. It’s highly unlikely someone committing an illegal act would want the details of those actions posted publicly, so we can safely assume that most of these public disputes would relate to free speech. We can also assume, reasonably enough, that social media companies would err on the side of protecting speech if they were liable for violations of the First Amendment. This liability would primarily be enforced via civil lawsuits: Companies like The New York Post that had their First Amendment rights violated could sue on their own, and individuals could band together in class action lawsuits. Conceivably, fines could be issued as well, but I’m not certain they would be required given the threat of civil action. A streamlined adjudication process in the courts might also be warranted. Whatever the precise details, the combination would result in companies that are biased toward openness and the free dissemination of information while still being able to police illegal content.
If this seems a bridge too far, consider that a model for navigating complex legal issues in the social media space already exists: Google’s YouTube platform deals every day with copyright law, balancing the rights of copyright holders against those of content creators who reuse assets for movie reviews, sports programming, and more. The process isn’t perfect, but copyright holders can flag content they feel violates their intellectual property, content creators can appeal if they feel it doesn’t, and both parties have an opportunity to respond as the dispute is arbitrated. The government and the courts are not involved. Similar to my assumptions above, content creators who are stealing intellectual property don’t respond, nor do copyright owners making spurious claims. The system is largely self-policing and exists without complex government regulation. To be sure, content creators would likely claim the process is biased in favor of copyright holders, and any process can be improved, but the fact remains that Google came up with this entirely on its own by translating existing law into the social media sphere.
Now, however, we’re supposed to believe that the same is impossible for other social channels when it comes to something as fundamental as free speech and open debate. It’s not. They’re saying that because they have other plans, mainly government regulation and policing of any and all speech. They will decide what is and isn’t allowed, what is and isn’t misinformation. That is the goal, and that’s what all of this is about, though of course they can’t come out and say so publicly. Fair-minded people who are concerned about the First Amendment should reject these approaches in favor of one centered on supporting free speech and making its benefits accessible to all. We should have no other goal. It is the first, and perhaps the most important, amendment after all.