My AI Caught Something a Major Podcast Missed
I use AI to help me write these posts. I'm not hiding that, and I'm not apologizing for it.
Nobody has said it to me in those exact words: "I don't trust AI-assisted writing." But I'm not naive. I know the sentiment is out there. Some people see a blog post and, if AI touched it, immediately assume the person behind it didn't do any real work.
I want to tell you about something that happened to me today that flips that assumption on its head.
I was writing a post about Colossal Biosciences — the company working on bringing back dire wolves, woolly mammoths, and the Tasmanian tiger. I first heard about them on Peter Diamandis's podcast, Moonshots. Ben Lamm, Colossal's CEO, was on the show, and it was a great conversation. I walked away thinking this company is doing some of the most exciting work on the planet. So I started putting together my post.
Here's the thing about how I work. My AI is set up to run a real verification pass before anything gets published: not a quick Google search, but a check of every stat, every quote, and every claim against at least three independent sources.
And my AI came back with a flag. It told me there has actually been significant scientific pushback on Colossal's dire wolf claims. The animals aren't dire wolves brought back from extinction. They're gray wolves with about 20 gene edits, edits informed by ancient dire wolf DNA, but still fundamentally gray wolves. Colossal's own chief scientist acknowledged this. Scientists criticized the way the project was presented, and MIT Technology Review put it on their list of 2025's worst tech flops.
None of that was in the Diamandis episode. A podcast with a massive audience presented the Colossal story as a straight-up de-extinction breakthrough, no asterisks. The way they talked about it, you'd think they pulled a dire wolf out of a time machine.
I still think the underlying technology is incredible and I wrote about that in my other post. But the way it was presented on that podcast was misleading. And my AI caught the gap.
Think about that. The format everyone trusts — a real person speaking on camera in their own voice — missed important context. The format everyone is skeptical of — AI-assisted writing — caught it and corrected for it.
This is what frustrates me about the anti-AI sentiment around writing. People assume that because a YouTuber or podcaster is speaking naturally, their information must be more reliable than someone who uses AI tools to research and verify. But why? Being charismatic on camera doesn't mean you did the homework. And using AI to help you write doesn't mean you skipped it.
I'm not saying podcasts are bad. I actually think YouTube and podcasting have been a massive upgrade over traditional media, which has squandered much of its credibility. Independent creators are doing important work. But the podcast format doesn't automatically make something more accurate than a blog post. And just because you can see the person's face doesn't mean every word they say has been verified.
For anyone skeptical about the way I write: look at the person, not the tool.
Look at whether they're transparent about their sources. Look at whether they acknowledge when a story has more than one side. Look at whether they include information that complicates their own argument — because I included the Colossal criticism even though I think the company is doing cool work. That's what tells you whether someone is trustworthy.
Google can be wrong. AI can be wrong. A podcast host can leave out context that changes the whole story. It all comes down to how much effort you put into verifying what you put out there. I take that seriously, and I'll put my process up against anyone's.
Sources: Peter Diamandis Moonshots Podcast (EP #165), MIT Technology Review, CNN, Nature