#464 Musings Beyond the Bunker (Friday September 23)
Good morning,
MAINTAINING BOUNDARIES
There is an old adage that “good fences make good neighbors.” The corollary of this maxim, and a mantra of today’s generation, is that people need to “establish boundaries.”
The common wisdom is that we should establish boundaries in our relationships. The idea is not to let other people get out of line; don’t let their expectations govern your life; don’t accept unsolicited advice; don’t allow them to interfere with you and your desires. Establishing boundaries also means that people who engage in speech with others must honor the established boundaries regarding topics that are off-limits or the personal sensitivities of the listener. So, on college campuses, the boundaries of delicate students, all with established belief systems honed through years of study and experience (yes, I’m being sarcastic), must be honored. And, of course, there are boundaries around whether one can challenge any notion that doesn’t conform to the prevailing orthodoxy. Of course there are circumstances when revisiting prior traumas can elicit stress and discomfort, and controlling for such traumatic events is part of self-care. But more often than not, the establishment of boundaries derives from preconceptions that might not stand the test of scrutiny, from close-minded certainty in one’s own perspective, or from unusual sensitivity to, and judgment of, the speaker. I’ve been thinking about the drawing of boundaries, and I just don’t think that more boundaries are, in many cases, really what we need.
What I have learned over the years is that our minds and our beliefs should be plastic. I often have learned more from people with whom I disagree than from those with whom I agree. Sometimes, we must make ourselves vulnerable to harsh words, challenging ideas, and horrific descriptions and photographs. It is not enough to learn about slavery simply through vanilla descriptions and numbers. Photographs of former slaves, their backs exposed to show the welts from whippings, demonstrate the brutality of that institution in a way no words can. Descriptions of the concentration camps, and understanding the plight of a single family, are more instructive than the abstract number “six million.”
Then there is the balancing of ideas. Should we avoid discussing affirmative action because of the “microaggressions” it may cause? Is it fair to discuss whether a white actor can play a Latino person in a movie? Can one be against racism without being the “anti-racist” Kendi describes? Is a Mexican-themed party, with margaritas and mariachis, really a racist co-opting of a culture? Is learning about the racism extant in our history really an attack on our nation’s ideals and goodness? We should be listening more—even to things we find displeasing.
Having looser boundaries means that we welcome honesty and disagreement. We expose ourselves to other ideas that can be helpful and instructive. Loose boundaries mean that we get advice and deal with people as people. It suggests taking risks in relationships and showing vulnerability. And more fluid boundaries might even allow us to be better people. “Establishing boundaries” often suggests to me that we aren’t secure enough in our own positions to withstand criticism or disagreement. Conversations can become stilted and relationships can become awkward when we tiptoe around each other. I worry that “establishing boundaries” is another way of saying, “I can’t handle honest disagreement.”
Boundaries should be like backyard fences—establishing limits but also porous enough to be susceptible to the occasional breach. They are unhelpful if they become more like concrete walls—tall, imposing, casting a shadow, and impenetrable.
ARTIFICIAL INTELLIGENCE
Artificial intelligence (“AI”) is the next frontier. I am trying to learn more about this challenging, and concerning, opportunity. Thankfully, some of those at the leading edge are taking a thoughtful view of its rollout, having witnessed the destruction existing platforms (particularly in social media) have created. Leaders in the field have established a research organization called OpenAI. There is an excellent article on the advances in, and challenges posed by, AI in the April 17th issue of The New York Times Magazine.
One purpose of OpenAI’s approach is to lay out specific prohibited uses. Some developments in the area of AI are released as open software, but the license for the use of that software explicitly forbids using these tools to “determine eligibility for credit, employment, housing, or similar essential services.” Also excluded are payday lending, spam generation, gambling, and the promotion of “pseudopharmaceuticals.”
The Times article is fascinating in setting out the limits and accomplishments of AI today. It discusses how machines are now learning to respond to questions through the exercise of predictive logic. The writing generated by these machines is in many cases still clumsy and misses the mark, but at times it is eerily canny in its style. One example was a prompt to “write a paper comparing the music of Brian Eno to a dolphin.” The answer was serviceable, if not brilliant. But it is getting better.
In some ways, Open AI is the answer to Maxwell Smart’s observation regarding Mr. Big, “If only he could have turned his evil genius into niceness…” These people are at least trying to cut AI off at the pass and redirect the tendency toward evil genius into a nicer, more humane, outcome.
AI is going to get better and better. We should be concerned, and we must establish guardrails. In some cases, we should be worried about our jobs and those of our fellow Americans. We should remain vigilant in addressing misinformation and “deep fakes” that impersonate people, and attribute opinions to them, at a rate and with an accuracy we have not yet seen. In all cases, how will we ever know where an idea or an opinion comes from, and whether it is tempered with the human qualities of reason, temperance, and empathy?
Have a great day (I am not a robot),
Glenn
From the archives: