I Did Everything Right and Still Can't Afford a House. AI Might Be My Way Out.
Something's shifted. If you've been paying any attention over the past year, you've felt it.
Across the country, people are pushing back against AI — and not quietly. Artists are organizing. Workers are protesting. Entire communities are blocking data center construction. A Pew Research Center study from late 2025 found that half of American adults say the increased use of AI in daily life makes them more concerned than excited. Only 10% said they're more excited than concerned. That's not a fringe position. That's the mainstream.
And honestly? A lot of that frustration makes complete sense.
The frustration is real
Let's not pretend this is irrational fear. People have watched AI-generated content flood the internet. They've seen companies lay off teams and quietly replace them with automation. They've watched brands swap real models and real artists for AI-generated ones and act like nobody would notice.
More than 11,000 actors and artists signed a statement calling the unlicensed use of creative works to train AI a "major, unjust threat." Activists in Virginia, Indiana, and Arizona stalled $98 billion worth of data center projects in a single quarter of 2025. In early 2026, New Orleans passed a moratorium on new data center construction. Madison, Wisconsin did the same.
That's not paranoia. That's pattern recognition.
When someone tells you they're worried about AI, the worst thing you can do is wave them off with "well, technology always changes." That's true. It's also lazy. A technology that can mimic human creativity and thought at scale — in months, not decades — deserves serious scrutiny.
But "anti-AI" is too simple
Here's where it gets tricky. Saying you're "anti-AI" in 2026 is a bit like saying you were "anti-internet" in 2002. The internet gave us misinformation, cyberbullying, and surveillance capitalism. It also gave us access to the world's knowledge, the ability to connect with anyone anywhere, and tools that genuinely improved lives.
AI is shaping up the same way. The same technology that can deepfake your face into a video you never made can also help a doctor catch a diagnosis they would have missed. The same large language models people are protesting are helping dyslexic students write clearly for the first time, or letting a small business owner in rural America compete with companies that have entire marketing departments.
According to Pew, 57% of Americans rate the societal risks of AI as high — but most people who actually use AI at work find it helpful. There's a gap between how AI is experienced up close and how it's perceived from a distance. That gap matters.
What people are actually mad about
If you dig beneath the surface, most of the anger isn't really about the technology itself. It's about how it's being deployed and who benefits.
People are frustrated that companies are using AI to cut costs while executives pocket the savings. They're angry that their creative work was scraped without permission to train models that now compete with them. They're watching their electricity bills climb because data centers are consuming massive amounts of power in their communities — and getting tax breaks to do it.
A YouGov poll found that the share of Americans who believe AI will negatively affect society jumped from 34% in late 2024 to 47% by mid-2025. That's a 13-point swing in a matter of months. People aren't turning against a technology. They're turning against the way it's being handled.
Those are governance problems. Accountability problems. Power problems. They're not technology problems.
And that distinction matters, because if we frame this as "AI is bad," we end up fighting the tool instead of the decisions being made about how to use it.
The conversation we should be having
Instead of "Are you pro-AI or anti-AI?" — which is about as useful as asking someone if they're pro-electricity — what if we asked better questions?
Like: Who decides how AI gets used in our workplaces? What does fair compensation look like when your work trains a model? How do we make sure AI doesn't widen the gap between people with resources and people without? Where should AI absolutely not be making decisions — and who enforces that?
More than 1,000 AI-related bills were introduced at the state level in the US in 2025 alone. Legislators are trying to figure this out in real time. But these aren't just policy questions. They're community questions that need input from everyone — not just the people building the technology or profiting from it.
Where I land
I don't think being anti-AI is wrong. I don't think being pro-AI is wrong either. I think the framing itself is the problem.
But I'll tell you where I'm coming from, because I think it matters.
I've spent my whole life trying to do the right thing. I joined the military. I went to culinary school — "follow your passion," they said. I became a chef. And after all of that, I found out that a chef making $75,000 a year still can't afford a house. Not in this economy. I did everything I was supposed to do, and the math still doesn't work.
Without AI, the site you're reading this on wouldn't exist. I'm not a developer. I didn't go to school for computer science. TrustHub exists because AI tools gave me the ability to build something I never could have built on my own. And for the first time in a long time, I have real hope — hope that I can leave a nine-to-five that drains me, that I might actually be able to afford a home someday, maybe even start a family. Things that felt out of reach no matter how hard I worked or how many "right" choices I made.
I know that's not everyone's experience with AI. For some people, it's the thing threatening their livelihood, not enabling it. Both of those realities exist at the same time, and that's exactly why this conversation matters.
The anger people feel is valid. It's pointing at real issues — job displacement, consent, accountability, rising energy costs, inequality. But the solutions won't come from rejecting technology wholesale. They'll come from demanding better from the people and institutions that control how it's used.
We need more conversations, not fewer. More voices, not just the loudest ones. And more willingness to sit with the uncomfortable truth that AI is both genuinely useful and genuinely dangerous — and that dismissing either side doesn't help anyone.
The backlash isn't the problem. The lack of a productive conversation around it is.
What do you think? Are the concerns overblown, underblown, or pointed at the wrong things entirely? Drop your take below — agree or disagree, all perspectives welcome here.