Computer AF 22: AI Rot, RoboTaxis, and the Digital Fog We’re Wading Through

If it feels like the internet is getting dumber, more chaotic, and somehow harder to trust, you might not be imagining it. And no, it’s not TikTok this time. It’s AI.

In the latest episode of Computer AF, a tech podcast that leans into the absurdity of our digital age, Anne and John take a scalpel to the current state of artificial intelligence, autonomous systems, and the blurry, often contradictory culture forming around both. The result is a conversation that peels back the overhype of innovation and asks a much simpler question: what exactly are we building, and who is paying attention?

The AI Brain Rot Discourse Is Getting Louder

A recent MIT study sparked the episode’s central discussion: does using generative AI dull the brain? The study, while limited in size and not yet peer-reviewed, compared groups of students writing SAT-style essays with and without AI tools and measured differences in neural engagement. As expected, the group relying on ChatGPT showed less brain activity.

The bigger takeaway isn't that AI causes cognitive decline; there's no solid science to support that claim yet. But the study does reinforce a real concern: overreliance on AI tools may chip away at the cognitive effort we're willing to invest. The idea of "AI brain rot" might be more cultural slang than medical diagnosis, but it signals something real. The problem isn't the tool. It's how we choose to use it.

Like most digital media cycles, the fear of AI-induced laziness mirrors past moral panics: video games, TikTok, television, and the internet itself have all taken turns being blamed for dulling society’s edge. But there’s a meaningful distinction here. Unlike those earlier boogeymen, generative AI often completes tasks for us, which raises the stakes on what we might be offloading from our own minds.

“AI Slop” and the Compounding Misinformation Loop

One of the more damning phrases of the episode, AI slop, refers to the glut of low-quality, AI-generated content now cluttering the internet. From search engine chum to social media hallucinations, the volume of machine-written content is exploding, much of it riddled with subtle inaccuracies or flat-out falsehoods. And it’s being indexed, amplified, and fed back into new models.

This feedback loop is beginning to reshape what it means to even find reliable information online. Garbage in, garbage out. But at AI scale, it’s garbage in, garbage everywhere.

Part of the concern stems from where certain large language models pull their data. Grok, the LLM deployed by Elon Musk’s X, is trained largely on the site’s own firehose of user-generated content. That raises questions about bias, accuracy, and stability, especially when the platform is already plagued by bots, political slants, and algorithmic noise.

While Grok may offer certain advantages for developers, its often erratic or offensive social commentary has made it difficult to trust. When a model trained on online chaos tries to summarize the chaos, things tend to go sideways.

RoboTaxis for Teens: What Could Possibly Go Wrong?

Another story dissected on the show was Waymo’s rollout of teen accounts, allowing kids as young as 14 to ride alone in its driverless vehicles in Phoenix. It’s a calculated bet on the future of autonomy and on parents’ willingness to trust it.

There's no denying the efficiency. But as the hosts pointed out, putting teens alone in a car with no driver and no adult supervision could quickly turn into a predictable cocktail of smoking, drinking, and awkward social experimentation. It's less a knock on the tech and more a reality check on how humans, especially teenagers, use autonomy when no one's watching.

The broader tension here is about the shifting thresholds of trust in machine-driven systems. As automation expands, who’s really in control, and are we outsourcing too much, too fast?

Tech Literacy, Paywalls, and the Fight for Signal in the Noise

The conversation eventually pivoted to a more grounded question: how do we actually know what’s true anymore?

With AI-written articles on the rise, paywalls blocking quality journalism, and social media blurring the line between reporting and opinion, it’s never been harder — or more important — to curate a clean information diet. The consensus? You have to want to know. You have to actively seek signal instead of passively absorbing noise.

That means paying for news when possible. It means using fact-checking tools and browser plugins. And above all, it means not relying on social media feeds to form your worldview.

The information environment isn’t hopeless, but it is polluted. And cleaning it up will take more than filters and firewalls. It requires a cultural shift in how we consume, question, and verify.

Want more Computer AF? Check out the other articles or watch the show on YouTube.

Computer AF is a tech-focused show featuring the genius combination of Anne Ahola Ward and John Boitnott. Enough said.
