AI is BS

By: Brian D. Wassom

As of this writing, society appears to be somewhere around the fourth-date phase of its relationship with generative artificial intelligence: still infatuated, but beginning to sober up enough to acknowledge that the object of our affections may not prove to be entirely worthy of our trust. All but the most ardent of enthusiasts have come to realize that we can’t take for granted that everything AI applications tell us is true. Darling-of-the-moment ChatGPT, for example, unblushingly delivers fact and fiction with equal gusto in response to our queries. The prevailing description of these flights of fancy is “hallucination”—an anthropomorphizing term that implies the app truly desires to be honest with us but is occasionally betrayed by an involuntary break with reality.

To the extent this terminology conveys that ChatGPT is not intentionally lying to us, it is fair enough. But to the extent the word “hallucination” implies that the app is otherwise telling the truth, it is materially misleading. Of course, to speak of AI as if it is something more personal than the complex interaction of 1s and 0s is our first mistake, and its own brand of fiction. A quote attributed to Stephen Hawking defines “intelligence” as “the ability to adapt to change.” By that measure, AI programs act intelligently. The very thing that sets AI apart from garden-variety software is its ability to adapt the way it functions in response to the accumulated feedback it receives from prior inputs and actions. We call these processes “machine learning” or “deep learning,” and the more advanced AI designs are known as “neural nets”—all terms patterned after the way human minds operate.

But these terms are more analogical than accurate. AI is not—cannot be—“intelligent” in the same way a human brain is, because software lacks a mind. Merriam-Webster more thoroughly defines “intelligence” as “the ability to learn or understand,” “the skilled use of reason,” and “the ability to apply knowledge … or to think abstractly.” Machines can do none of this. If an AI program generates data that happens to correspond to reality, that is the happy result of its human coder’s effort. These programmers are like civil engineers who design a complex array of subterranean pipes precisely enough to ensure that water flows only to the desired location. But an AI program does not know it is telling the truth any more than a pipe knows it is delivering water correctly, because it has no mind with which to ascertain what reality is.

We can at least take comfort in the fact that AI cannot lie to us. A liar is one who recognizes the truth and chooses to deceive the listener into believing something different. AI cannot “know” the truth, so it cannot lie.

But neither can it “hallucinate,” strictly speaking; to hallucinate is still to perceive a reality, just not the actual one. So if—as seems inevitable—we’re going to continue speaking of AI in anthropomorphized terms, we should at least be more precise. Lulling ourselves into a mistaken understanding of how the software functions sets us up to misunderstand what it does for us, and thus to overlook both its limitations and its true potential. To that end, I propose that the most accurate way to describe the output of generative AI programs is—to use the abbreviation—“BS.”

I’m serious. In a 2005 book that became his most popular publication, the now-94-year-old American philosopher Harry G. Frankfurt set out to define this oft-used but ill-understood term. Titled On Bullsh*t, the book posits that “the essence of [BS] is not that it is false but that it is phony.” Unlike a liar, who is actively “attempting to lead us away from a correct apprehension of reality,” the BS artist “does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.”

That is as close as we can come to describing ChatGPT’s conversations with us. The only difference between a chatbot and a human BS artist is that the person does it intentionally. AI has no intention. But that just cements the description. As Frankfurt says, “[BS] is unavoidable whenever circumstances require someone to talk without knowing what he is talking about.” Since ChatGPT cannot know what it is talking about, it cannot say anything other than BS.

This is easiest to see in the context of a chatbot, but the same principle applies to all manifestations of AI. Other AI programs have been known to make health care treatment recommendations based on inaccurate conclusions drawn from incomplete data. Or to make employment decisions based on data sets skewed by prior discrimination. Or to identify criminal suspects based on facial recognition software that reads faces of color less accurately than white faces. Or to autocorrect words in ways that turn an innocent text message into a mortifying faux pas.

None of these outcomes were inevitable, nor were they intrinsically the fault of the programs themselves. The inaccuracies in AI output can, and often will, be reduced by humans implementing more refined code—more sophisticated pipes to make sure the water goes only where it is supposed to and nowhere else. That said, much of the wonder and worry surrounding the most advanced AI programs is that their human authors don’t know how they work; the program has long since adjusted its own parameters so many times that its function is inscrutable even to its creators. BS begotten by BS.

But that is not to say we should stop developing and using AI applications. To the contrary, there are brilliant people developing useful, ethical implementations of AI. The power and utility of such inventions cannot be denied, so long as they remain purpose-built tools rather than trusted advisors. AI solutions must be implemented within guardrails restrictive enough to ensure that the program’s final output corresponds to reality closely enough to be useful (after human-led review and revision) for its intended purpose. The acceptable parameters will vary by context. And sometimes a little BS can be a good thing—such as when using ChatGPT to brainstorm how to make a meeting sound interesting, or relying on Midjourney to render amazing and inspiring images, even if the people in them end up with a few extra fingers.

But the breakneck speed of AI’s adoption across virtually all economic sectors at once, coupled with the breathless awe with which salespeople and mid-level managers speak of these apps’ potential, remains a cause for serious concern. Even if portions of our societal consciousness know better than to trust AI, there are still far too many of us being seduced by its pillow talk.

AI-powered tools will be useful for an increasingly broad array of applications that inform and assist human decision-making, but they must always be wielded by humans exercising independent judgment. “The [BS artist] … does not reject the authority of truth, as the liar does …. He pays no attention to it at all. By virtue of this, [BS] is a greater enemy of the truth than lies are,” Frankfurt warned. To the extent we allow ourselves or our businesses to uncritically rely on generative AI as a source of truth, we will come to regret it.