AI Is Reshaping Cybersecurity Jobs. Is That Good or Bad for Workers?
The Department of Homeland Security is partially shut down. Iran-linked hackers are ramping up attacks on US infrastructure. And somewhere in San Francisco, a startup is pitching AI tools that can find and patch vulnerabilities in seconds.
All three of those things happened in the last 48 hours. Taken together, they paint a picture of an AI cybersecurity industry at a strange inflection point — where the same technology making attacks more dangerous is also making defense more accessible. And nobody can agree on whether that adds up to good news or bad for the people who work in cybersecurity.
The Attack Surface Is Getting Wider
The Iran-linked cyberattacks hitting US systems right now are a reminder that the threat landscape is not theoretical. Critical infrastructure — power grids, water systems, financial networks — is in the crosshairs. And the tools those attackers are using are getting more sophisticated by the month, because AI makes it cheaper and faster to probe for weaknesses, craft convincing phishing messages, and scale attacks that used to require entire teams.
RSA Conference, the big annual cybersecurity gathering, spent a lot of time this year talking about exactly this. The consensus among the CEOs who spoke: AI is a massive force multiplier for attackers. What used to take a skilled hacker weeks can now be partially automated.
But the Defenders Are Using It Too
Here is the other half of the picture. The same AI tools that let hackers attack at scale are the ones letting security teams respond at scale. AI can monitor network traffic for anomalies, flag phishing attempts, and even suggest patches — all in real time, without waiting for a human to spot the problem.
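The anomaly-monitoring piece of that is the most concrete, so here is a deliberately minimal sketch of the idea: score each traffic interval against a trailing baseline and flag sharp deviations. Real security tooling uses far richer models; the data, window size, and threshold below are illustrative assumptions, not any vendor's actual method.

```python
# Toy anomaly flagging: compare each interval's request count to a
# rolling baseline and flag intervals that deviate sharply.
# Hypothetical numbers for illustration only.
from statistics import mean, stdev

def flag_anomalies(counts, window=6, threshold=3.0):
    """Return indices of intervals whose count deviates from the
    trailing window's mean by more than `threshold` std deviations."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:  # flat baseline: z-score undefined, skip
            continue
        if abs(counts[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady per-minute traffic with one sudden spike at index 7
traffic = [120, 118, 125, 122, 119, 121, 123, 950, 120, 117]
print(flag_anomalies(traffic))  # → [7]
```

The point of the sketch is the shape of the workflow, not the statistics: the machine watches everything all the time, and a human only gets pulled in when something crosses a line.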
This matters because there is a well-documented shortage of cybersecurity professionals. Estimates put the global gap at somewhere between three and four million unfilled positions. AI cannot fill that gap on its own, but one analyst armed with good AI tools can now do the work that used to require a small team. For an industry that has been screaming about being understaffed, that is a genuine relief.
So Does It Just Cancel Out?
That was the question sitting in my head after reading through this week's cybersecurity jobs coverage. If attackers get AI and defenders get AI — is that not just a wash?
Maybe. But I keep coming back to the labor market angle, because I think that is actually the more interesting long-term question.
Right now, if you are a mid-career cybersecurity professional — you have been doing this for ten years, you have your certs, you are pulling in $130K to $150K — AI tools are probably making you significantly more productive. Good news for your employer. But also: why does a company need five of you when one of you with the right AI setup can cover more ground?
There is a version of this where AI compresses the upper end of the market. The rockstar analysts might see their output double or triple — but the market for "good enough" security work might shrink, because more of it gets automated. That could mean higher pay at the top and more opportunity at the entry level, where AI assistants make it easier to get up to speed. Or it could mean an industry that employed a lot of people at solid middle-class salaries quietly becomes an industry that employs fewer of them, just more efficiently.
The "Vibe Security" Question
This is where it gets genuinely weird — and honestly a little uncomfortable to think about.
If "vibe coding" is real — if you can hire someone who does not know how to code but knows how to talk to AI tools well enough to build something functional — is "vibe security" also a thing? Can someone who understands attack patterns, knows what to look for, and knows how to prompt an AI security tool do meaningful cybersecurity work without traditional training?
I do not know the answer. But I think the answer matters a lot.
If yes — if AI really does lower the barrier enough that someone with good instincts and AI tools can contribute to a security team — then the talent shortage shrinks dramatically. More people can do the work. More organizations can afford basic security coverage. That is probably good for most businesses and most consumers.
If no — if the genuinely hard parts of cybersecurity still require deep expertise that cannot be replaced by a good AI prompt — then AI makes the experts even more valuable, and everyone else is just along for the ride.
Where That Leaves Me
I keep coming back to the spreadsheet analogy. When spreadsheets got good, everyone said accountants were going to disappear. They did not disappear — but the job changed. A lot of number-crunching got automated. The people who adapted and learned to work with the tools ended up more productive. The ones who did not had a harder time.
I think cybersecurity is in a similar transition right now. The attackers are not going to stop. The talent shortage is not going to magically resolve. AI is going to keep getting better on both sides of the equation.
Whether that ends up being a net positive probably depends less on the technology and more on whether the industry figures out how to make the transition work for the people actually doing the work.
What do you think? Is AI going to make cybersecurity better, or just cheaper? And does "cheaper" have to mean worse?
Sources: RSA Conference 2026 CEO panel (CRN) | ABC News — Iran-linked cyberattacks coverage | Morgan Stanley cybersecurity equity research | DHS/CISA shutdown status reporting