Good morning,
A BASIC THOUGHT ABOUT SOCIAL MEDIA
I have attended several sessions on social media and artificial intelligence this month at the Aspen Ideas Festival. All of them acknowledged that technology is moving fast and that these advancing technologies pose both great opportunities and great risks. How we respond to them will be the subject of several snippets in these Musings in the coming weeks.
It is settled that the major social media platforms (Facebook, Twitter, TikTok, Instagram) are major contributors to loneliness and depression. It’s also clear that their impact is greatest on teens, and on young girls in particular. Social media—and the inability to disengage from its relentless pull and from algorithms designed to keep people there—is a major factor in the exploding mental health crisis and the increased suicide rates among our youth today.
Obviously, all of the social media platforms operate on a profit model. In order to generate more revenue, they have to sell ads that, in turn, sell products. They have determined that their best model is to keep users’ eyes on their sites as long as possible and that the best way to do so is to perpetuate postings that enhance confirmation bias and that bring about outrage. It has been shown that a post that generates outrage is reposted seven times more than one that is not. Is it any surprise that, faced with the opportunity to multiply viewers and extend their time online, the platforms are reluctant to modify their algorithms to stifle these postings?
Parents are burdened with having to monitor their children’s access to, and behavior on, social media. We are not well-equipped to do so, partly because we are not sufficiently adept and experienced in this realm, but also because we simply don’t know how these platforms work or how to use them wisely in light of their construction, their attributes, and their limitations. Consider an analogy: We don’t ask parents to examine a car before its use, check everything that goes into the car, and make sure it’s safe. We trust that it has been built within certain agreed safety parameters (via regulation!) and are comfortable getting behind the wheel. With internet platforms, not only are they essentially unregulated (and their owners protected from liability in a manner all other media are not), but we have no idea how they work or what goes into the algorithms.
It is shocking that there is no regulatory regime intended to monitor, evaluate, and certify these sites. It is shocking that we don’t really know (or understand) how the “black box” that feeds us stories, ads, and content works. Just as we don’t allow food producers to put dangerous products on grocery store shelves, we should demand accountability in this area as well. The FDA has the right and the responsibility to make sure our food is safe. Why don’t we have a federal regime to ensure that our social media sites are safe (or, at least, safer)?
A few years ago, I attended a talk by Mark Zuckerberg at the Aspen Ideas Festival. A condition of his participation was that he was not required to take questions. In the midst of the “softball” questions pitched to him, he told his interviewer that it shouldn’t be up to Facebook to monitor hate speech, pornography, and incitement to violence. He disingenuously insisted instead that it was government’s responsibility to regulate these social media platforms. It is all well and good for Facebook and the other platforms to buff up their public image by claiming that regulation would be welcome, but they are among the top funders of campaigns and lobbyists, financed at least in part to thwart any meaningful regulation.
At the risk of simplifying the issues, there are two main areas where social media platforms could be reined in, at no significant cost to them, while avoiding the significant societal costs they impose:
What platforms choose to post; and
How platforms choose to direct our attention.
TWO SIMPLE PROPOSALS FOR WHAT PLATFORMS POST
The social media platforms are protected by the now-infamous Section 230, which insulates the platforms from liability for what’s posted and re-posted on them. This law, enacted in the 1990s, when people had no idea what the world would look like today, provides a safe harbor that has allowed these companies to grow with abandon, letting as much content on as possible. In essence, Section 230 states that an online platform cannot be held liable for content posted on that site. This is, of course, quite different from a television station or newspaper, which can be held liable for publishing false, libelous, or incendiary content. Because the platforms are shielded by this cloak of invulnerability, the incentives to monitor content on their sites are largely gone. Sure, they may have their committees to review content, but that really hasn’t worked well.
Anyone else who republishes or spreads libel is liable; Facebook and the other platforms have no such fear. As such, there is no legal impetus for them to be vigilant. There needs to be a change, but a repeal of Section 230 seems unrealistic. So how about the following:
Get rid of the bots. Bots are a big part of the problem, as they can multiply easily and can “flood the zone.” One way to reduce hate speech and incendiary speech is to reduce the ability of a bot to post online. What if we were to require everyone trying to post to demonstrate that they are human? Many websites already require us to make such a demonstration. It would seem easy for each platform to require posters to clear a CAPTCHA or similar verification system to confirm the poster is not a bot. This is pretty simple technology that already exists. It may result in fewer posts, but at least they will come from real people. I think there will be push-back to the idea of creating even a small barrier to posting, despite how it would reduce stress, depression, election interference, and violent talk.
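To make the idea concrete, here is a minimal sketch (in Python, with invented function names; it does not reflect any real platform’s or CAPTCHA vendor’s API) of a posting gate that refuses anything not submitted by a verified human:

    # Illustrative sketch only: a hypothetical "verify before you post" gate.
    # The names (verify_human_token, submit_post) are invented for illustration;
    # no real platform or CAPTCHA vendor's API is implied.

    def verify_human_token(token: str) -> bool:
        # A real system would call a CAPTCHA provider's verification endpoint;
        # here we simply treat any non-empty token as a solved challenge.
        return bool(token)

    def submit_post(author: str, body: str, captcha_token: str) -> dict:
        """Accept a post only if the poster has cleared a human-verification challenge."""
        if not verify_human_token(captcha_token):
            raise PermissionError("Post rejected: human verification failed.")
        return {"author": author, "body": body, "status": "published"}

    # A post with a solved challenge goes through; one without is refused.
    print(submit_post("alice", "Hello, world", captcha_token="ok-123"))

The point is only that the gate sits in front of the posting step, so an automated account that cannot clear the challenge never gets published.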
Shift control to the posters and the readers: divide and conquer. The platforms have tried to give the appearance of caring and of regulating themselves. Giving them the benefit of the doubt, perhaps they have tried but have been unsuccessful to date. So let’s change the paradigm and share the burden of monitoring content. Let’s crowdsource the policing of the platforms. Just as users create content on Wikipedia and monitor that content, what if those reading posts on a platform were empowered to act as its content monitors? Let’s shift the burden of defining the items being posted to the posters and the burden of characterizing posts to the readers.
The platforms claim that the sheer number of postings makes it impossible for them to adequately police content; they say they are doing their best and cannot do better at that volume. This raises the question, of course, of why they felt the need to grow so large without accepting the responsibility associated with that growth. But perhaps we can make it easier for them by requiring people to label their posts in a manner that divides posts into “minimal review required” and “maximal review required.”
We should require those who are posting to label their posts, in order to make it easier to evaluate their veracity and acceptability. If we required each post to be accompanied by a category, readers could better ascertain the intent and, perhaps, govern their responses accordingly. Categories might include: scientific analysis, news, opinion, and parody/comedy. Requiring this “self-labeling” shifts the burden of responsibility to the posters, and once a post is categorized, those receiving it are better informed as to what they are reading.
So what would happen next, if there is a concern regarding veracity, hate speech, or another issue? If a reader of a post complains to Facebook (or a similar platform) that the categorization is wrong or that the material is unacceptable, then it is incumbent upon the platform to confirm whether the post is, in fact, consistent with the labeling provided by the poster and to review it accordingly. Presumably, something labeled by the poster as parody would not be subject to a deep review. But if someone posts something claiming it is news and a reader claims it is not news (but opinion or parody), then the platform would have the obligation to confirm the accuracy of the post. By way of example, a deepfake can be posted as parody, but it cannot be posted and labeled as fact; if it is, then when the platform is advised of the problem, it would be deleted. Similarly, misinformation about the measles vaccine can be posted as opinion but not as fact. If a platform takes no further action after a complaint that an item does not fall within a particular categorization, it would be liable, notwithstanding Section 230.
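For what the mechanics of labeling and complaint-driven review might look like, here is a minimal sketch in Python; the categories come from the proposal above, but the function names and data fields are invented for illustration and reflect no actual platform’s system:

    # Illustrative sketch only, not any platform's actual moderation system.
    # It models the proposal above: every post must carry a poster-chosen label,
    # and a reader complaint about mislabeling obliges the platform to review it.
    # All function names and fields are hypothetical.

    CATEGORIES = {"scientific analysis", "news", "opinion", "parody/comedy"}

    def create_post(author, body, label):
        # The poster must pick a category before the post is accepted at all.
        if label not in CATEGORIES:
            raise ValueError(f"Label must be one of: {sorted(CATEGORIES)}")
        return {"author": author, "body": body, "label": label, "review": None}

    def file_complaint(post, claimed_label):
        # A reader asserts the content does not match its label (e.g. "news"
        # that is really opinion). The post enters a mandatory review queue;
        # under the proposal, ignoring that queue would forfeit Section 230 cover.
        post["review"] = {"claimed_label": claimed_label, "status": "pending review"}
        return post

    post = create_post("bob", "Measles vaccine claim...", label="news")
    file_complaint(post, claimed_label="opinion")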
ONE SIMPLE CONCEPT ABOUT WHERE PLATFORMS DIRECT ATTENTION
A common refrain from the media giants is that there are complex algorithms that determine what sorts of things we would want pushed to the top of our social media streams or pushed to the top of our web searches. The media moguls act as if these are computers that somehow are connected into our innermost thoughts and desires, powered by an anonymous algorithm that effects this magical connection. This, of course, is sophistry. The algorithms are created by people; they are not some sort of all-knowing and infallible instrument.
Plus, the algorithms fundamentally are not merely tools to fulfill our perceived desires. There aren’t two parties to the analysis, but four: in addition to the user and the information they are being directed toward, the platform and the advertisers stand between them. Search results and feeds are designed to serve the needs of the advertisers (who want to sell stuff) and of the platforms (who want us engaged and, if possible, outraged, so as to extend our visits as long as possible). There are moves afoot to require disclosure of the algorithms themselves, so we can all see what the platforms have designed to manipulate our interests and behaviors. There are also moves to allow users to choose the type of algorithm used to sort and direct information. Knowledge is power. It has been kept from the users so as to maintain the power of the platforms. It is time that this knowledge—this power—is shared. Power to the people, baby!
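As a toy illustration of what user-chosen ranking could mean, here is a minimal Python sketch; the “engagement” and “chronological” rankers and the outrage score are invented for illustration, not taken from any platform’s actual code:

    # Illustrative sketch only: what "letting users choose the algorithm" could
    # look like in miniature. The ranking choices and post fields are invented
    # for illustration, not drawn from any platform's actual code.

    from datetime import datetime

    posts = [
        {"text": "calm local news",   "outrage_score": 0.1, "time": datetime(2024, 6, 1)},
        {"text": "inflammatory take", "outrage_score": 0.9, "time": datetime(2024, 5, 30)},
    ]

    RANKERS = {
        # Engagement-style ranking: the platform's preference (outrage first).
        "engagement": lambda feed: sorted(feed, key=lambda p: p["outrage_score"], reverse=True),
        # Chronological ranking: a choice a user might make instead (newest first).
        "chronological": lambda feed: sorted(feed, key=lambda p: p["time"], reverse=True),
    }

    def build_feed(feed, user_choice):
        # The user, not the platform, decides which sorting rule applies.
        return RANKERS[user_choice](feed)

    print([p["text"] for p in build_feed(posts, "chronological")])

The point is simply that the sorting rule is a choice, and today the platforms make that choice for us.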
Just a few ideas. I’m sure there are other good ones. But regulation of what is on these platforms—which are contributing to violence, depression, and other maladies—must come.
More to come.
Have a great day,
Glenn
You need to define "hate speech" and "incendiary speech" in a fair and meaningful way, if at all, before you start suggesting a regulatory structure that will limit free speech guaranteed by the First Amendment, and both the definitions and the regulation must err on the side of permissiveness if they are to be upheld and credible.