#338 Musings Beyond the Bunker (Friday April 29)
Good morning,
MISINFORMATION ABOUNDS
As I have written before, we are living through a pandemic of misinformation. Unlike in pre-internet times, when one crazy announcement or another had a relatively short life, the proverbial “flash in the pan,” today it’s different. Information, regardless of source or credibility, is distributed and republished in minutes. It is not vetted in public or analyzed by journalists and scholars before it is posted and reposted. And even if a post is demonstrated to be false, perhaps even removed from the platform, the damage is already done. Further, the statements made are often anonymous. Those who post disinformation are not accountable, often not even identifiable. No one takes responsibility. Truth is secondary to titillation value.
Those who willingly allow the posting of information have little incentive to monitor or restrict the dissemination of content that often is untrue, can be damaging, and might provoke violence. Section 230 of Title 47 of the U.S. Code grants the social media platforms immunity with respect to items communicated on their platforms. On the one hand, this might be viewed as encouraging the open exchange of ideas, with the marketplace ferreting out truth from lies. But that’s no longer the way it works. So we are left with the Mark Zuckerbergs of the world to police themselves, while they publicly encourage the government to take control (yet fight those very controls). Without responsibility for what they allow to be posted, these organizations are free to create a system of rules and regulations that suits their purposes. They want more eyeballs on their sites for longer periods of time. And they want eyeballs whose passions have been stirred by the content, so it is the most provocative information, and information confirming established biases, that best fulfills their objectives.
The social media platforms claim that there is simply too much information being posted and that they cannot perform the functions of censorship. They say it’s not their job. They claim the government should regulate them. Yet little is being done to purge the marketplace of information spread by automated “bots,” manipulated videos, falsified claims, and endless hate speech. There are a number of arguments against censorship, and we’ve heard them all:
There are simply too many posts to keep track of.
Censorship is wrong; information should be ubiquitous.
Who are we to judge what is factual and what is not? One person’s opinion is another person’s fact.
Bots post a lot of things; they can’t be controlled.
We are only providing the means for people to speak. Posting on our platform does not make the platform responsible for what is said.
A SIMPLE PROPOSAL TO ADDRESS A SMALL PART OF THE FACEBOOK PROBLEM
So long as the Facebooks of the world are profit-making corporations, their interest is in expanding content and keeping people’s attention (and, therefore, availability for advertising) as high as possible. This is at cross-purposes with society’s desire to keep hate speech, unfounded conspiracies, and lies from being disseminated and republished, since limiting those things by definition reduces content and reduces the number of provocative posts. But I have two suggestions to solve at least some of this:
I can’t make headway on many sites without demonstrating that I’m not a bot. You know, the annoying “check everything that looks like a motorcycle” test that one must click through in order to access a website. Can someone explain why this simple test shouldn’t be mandated by law, thereby putting a significant damper on the Russian, Chinese, and other misinformation being planted on our sites and in the minds of our citizens?
We should require people who are posting things on social media (or, for that matter, generally) to self-describe exactly what it is they are posting. If someone alleges that something is news, then label it as such. If it’s opinion, then label it. And if it is satire, let people know that it’s fiction and simply for a laugh. In this way, each time someone receives a post, they are forewarned regarding the intent of the content.
Here's an example of how this would work. Several years ago, a video of Nancy Pelosi was doctored so that she appeared incoherent and drunk. It obviously was a “deep fake,” a manipulated piece of video designed to misrepresent the actual speech. Facebook maintained that it was not its job to remove such content, as it was a legitimate piece of satire. But it was something manipulated in a way intended to mislead the viewer. Under my plan, the person posting this would have had to label it as parody and/or as manipulated content. Once it is labeled as humor or parody, the consumer is forewarned not to accept it as fact. If it is labeled as news, however, then the social media platform would be required to verify its truthfulness. If untruthful, it could be removed from the platform or be relabeled as false. In essence, the person posting is self-reporting. The burden shifts away from the platform, which otherwise must sift through reams of posts to determine whether to limit speech, and onto the person with the opinion, who must clearly label what they think it is they are alleging. And it becomes easier to apply different rules depending upon the intent of the content.
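To make the mechanics concrete, here is a minimal sketch of the self-labeling scheme as a set of rules. The label taxonomy, the `Post` type, and the `handle` function are my own hypothetical rendering for illustration, not any platform’s actual policy or API.

```python
# Illustrative sketch only: labels, types, and rules here are hypothetical,
# chosen to mirror the self-labeling proposal described above.
from dataclasses import dataclass
from typing import Optional

# Self-declared labels a poster must choose from (a hypothetical taxonomy).
LABELS = {"news", "opinion", "satire", "manipulated"}

@dataclass
class Post:
    text: str
    label: str  # supplied by the poster, not by the platform

def handle(post: Post, verified_true: Optional[bool] = None) -> str:
    """Return the platform's action under the self-labeling scheme.

    verified_true matters only for posts labeled 'news': it records
    whether the platform's fact-check found the claim to be true.
    """
    if post.label not in LABELS:
        # Self-reporting is mandatory; an unlabeled post never goes up.
        return "reject: post must carry a self-declared label"
    if post.label == "news":
        if verified_true is None:
            return "queue for verification"
        return "publish as news" if verified_true else "remove or relabel as false"
    # Opinion, satire, and manipulated content go out with a banner,
    # so the reader is forewarned about the intent of the content.
    return f"publish with '{post.label}' banner"
```

Under these rules, the doctored Pelosi video would have had to be submitted as "satire" or "manipulated" and would carry that banner, while anything submitted as "news" would sit in a verification queue until checked.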
BROADER APPLICATION
I actually think this idea might be extended to cable news. Tucker Carlson and other purveyors of lies and unfounded conspiracies would be clearly labeled as opinion (and, by the way, so would many of the commentators on MSNBC and other stations). Since we have failed to train generations of Americans in critical thinking, at least we can label what they’re getting, so they know whether something is intended for consumption as fact, as commentary, or as parody.
Fox News apparently agrees with the characterization of Mr. Carlson as not delivering “news.” Fox actually used this defense to its own benefit in court, winning a lawsuit by making a bizarre admission about what one hears from Tucker Carlson. Here’s the judge’s summary of the defendant’s argument:
"Fox persuasively argues, that given Mr. Carlson's reputation, any reasonable viewer 'arrive[s] with an appropriate amount of skepticism' about the statement he makes."…Whether the Court frames Mr. Carlson's statements as 'exaggeration,' 'non-literal commentary,' or simply bloviating for his audience, the conclusion remains the same — the statements are not actionable."
They’re not actionable because no sane person with a scintilla of independent critical thinking ability would ever think that what this guy says is anything other than fear-mongering, conspiracy-laden red meat intended to rile viewers with hate, resentment, and a call to action.
Have a good day,
Glenn
From the archives: