“We also know how cruel the truth often is, and we wonder whether delusion is not more consoling.” – Henri Poincaré

Oh, delusion is far more consoling than inconvenient truths and cruel realities! I’ve given in to more than a few delusions in my day. It’s a self-defense mechanism, and I likely wouldn’t be here without indulging in more than a few of them, some harmless, many harmful to my own integrity and self-worth.

This Poincaré quote, which Sagan used as a lead-in for Chapter 22 of his book The Demon-Haunted World, ties in perfectly with a video I watched recently from the YouTube channel Foolish Bailey: Facebook A.I. Slop (Starring: Your Favorite Athlete!)

The video brilliantly exposes how AI slop (low-quality, high-volume AI-generated content) is flooding platforms with fake celebrity headlines, especially ones targeting American audiences for ad clicks. These posts aren’t just silly gossip: they’re designed to evoke strong emotions, and that emotional pull is currency. People don’t engage because they believe the stories; they engage because the stories trigger outrage, humor, or disgust.

Basically, the story is that a Vietnamese website is generating fake “Breaking” stories about American athletes to drive engagement on a network of Facebook pages and, hopefully, revenue from clicks through to its ad-supported website. Bailey calls this another prime example of our “post-truth” society, where objective truth goes out the window when people are trying to make a living, especially in a country like Vietnam, where per-capita GDP is something like one-fifteenth that of the US. Why not steal ad money from Americans, right?

Most of the fake news stories are outrageous but largely harmless claims; the real issue is that they are patently untrue. I’m also not a fan of the anti-Pride stories featured on these particular Facebook ‘fan’ pages, and while Bailey refuses to take a stance on the matter itself, he does admit that these are among the AI fake news stories that generate the most engagement.

Bailey nails it: this isn’t just an information problem. It’s a cultural shift. We’ve moved into a space where how a story makes us feel trumps whether it’s factual. Slop taps into our biases for likes and shares, and even when readers know it’s fake, the engagement reward is addictive. Academic research backs this up: echo chambers reinforce what we want to believe, intensifying polarization, and social engagement metrics (likes, shares) make us more vulnerable to misinformation. We’re dealing with epistemic dysfunction, not just distraction.

The stakes here are high. AP News warns that conspiracies fueled by AI and socially engineered content have become a clear danger to democracy, fracturing trust in institutions and causing real-world harm. Meanwhile, a New Yorker piece argues that the very idea of shared, objective truth is eroding, threatening democratic stability.

So, Are We Cooked?

“Candid cynicism” seems fair. But we’re not entirely cooked. Recognizing the problem is the first step: we need platform accountability, which means rethinking algorithms optimized for engagement. Like Sagan suggested way back in the mid-1990s, we also need to double (or even triple) down on media literacy: teaching people how to verify, question, and research. (Bailey brought up this very point, too, which isn’t surprising, as he holds a journalism degree.)

Sagan made another point in The Demon-Haunted World: people need better incentives for fact-checking and source verification, actions that should be rewarded, not hidden. Some reforms may be underway, as ‘AI babysitters’ seem to be becoming a cottage industry, but the challenge is massive.

It may be useful, even productive, to speculate: can interventions in education and changes to recommendation algorithms restore some shared reality?
