
The expression ‘sometimes it’s best to get out of your own way’ might as well have been coined for cybersecurity.
According to IBM, 95 percent of data breaches involve human error, while just 8 percent of employees account for 80 percent of cybersecurity incidents. As artificial intelligence (AI) continues to reshape the cybersecurity landscape, the question for managed service providers (MSPs) isn’t whether AI can help—it’s whether technology can finally overcome our most persistent vulnerability: ourselves.
Organizations spend billions to strengthen their technology stacks, yet breaches continue largely unabated, most of them driven by human error. With cybercrime projected to cost $10.5 trillion this year and 54 percent of employees falling for phishing scams because the emails appeared legitimate, the stakes keep getting higher.
But with AI now part of an MSP’s everyday arsenal, the question becomes: can these tools ever completely eliminate human risk from cybersecurity? Industry experts say probably not.
Understanding AI’s limitations
Herb Hogue, Chief Tech, Solutions, and Innovation Officer at cybersecurity consultancy Myriad360, tells SmarterMSP.com that AI tools will probably never be able to eliminate all cybersecurity risks. “AI can definitely help reduce certain risks like spotting threats early before humans get involved, but completely eliminating risk is unlikely,” he explains. Because today’s technology still depends heavily on people, he adds, user error, gaps in training, and poor understanding of tools will continue to be issues.
“Plus, we’re entering a phase where AI isn’t just used for defense—it’s also being used by attackers. For example, criminals are already using AI-generated voices to trick people and bypass security systems at banks,” Hogue notes.
Traditionally, organizations have responded to human error with more training and awareness programs, but as Hogue points out, “It’s not enough. As AI becomes part of everyday life, people, especially those who are most vulnerable or less tech-savvy, will need better awareness and practical tools to protect themselves. That’s exactly who bad actors are targeting, and without proper support, those users will continue to be the weak point.”
The persistent email problem
Email remains the biggest weak spot in cybersecurity defenses, and AI is making the challenge more complex rather than simpler. “With AI, it’s becoming harder to tell if a message is real or a scam, because fake content looks so convincing now. While tools are improving to fight back, we’re essentially in an arms race of good AI vs. bad AI,” Hogue observes.
This technological arms race creates a moving target for security teams. Just as defensive AI tools become more sophisticated at detecting threats, attackers leverage similar technology to create more convincing phishing attempts, deepfakes, and social engineering campaigns.
Beyond individual training: A systemic approach
Jeff Le, principal and founder of consultancy 100 Mile Strategies, agrees that AI tools will likely never eliminate risk entirely. “At its core, humans still lead organizations and represent the biggest vulnerability to security and organizational integrity,” Le explains.
While user training and awareness remain important, Le emphasizes that cybersecurity practices and standards must become everybody’s responsibility, essential at every level from front-line employees to the executive suite. “Another area of emphasis must be on holistic cybersecurity and AI technology implementation planning. This is important for both strategic planning and compliance, especially as the EU AI Act takes shape.”
This systemic approach recognizes that cybersecurity isn’t just an IT department responsibility but requires organizational commitment at every level.
Building human-centered defense
Daniel Tobok, CEO of consultancy CYPFER and a 30-year veteran of the cybersecurity industry, offers SmarterMSP.com a different perspective on the human element. While humans often cause breaches, he argues, they are also the ones best positioned to prevent them, beginning with strong awareness programs.
“Strong awareness programs are not about fear. They are about recognition, clarity, and confidence. Employees need to know how to spot a phishing email, question a suspicious link, and report an incident quickly—without hesitation or embarrassment,” Tobok explains.
Tobok identifies key elements that make awareness programs effective:
- Real-world relevance means using examples from actual breaches, walking through the anatomy of malicious links, and showing how attackers impersonate brands, vendors, and even internal executives.
- Targeted sessions for high-risk roles recognize that executives, finance, HR, and legal teams are frequent targets who need focused sessions that go beyond the basics to address specific tactics used against them.
- Interactive delivery acknowledges that passive learning doesn’t stick. Effective programs use tabletop-style simulations, short-form videos, quizzes, and interactive scenarios to drive engagement and retention.
- Frequent reinforcement ensures training isn’t a once-a-year exercise. Ongoing touchpoints through newsletters, phishing simulations, and scenario-based reminders help keep security awareness top of mind.
- Clear incident response processes give employees confidence in knowing how and where to report suspected threats. Clarity around what happens next encourages action rather than silence.
Tobok concludes that the human element will always play a critical role in cybersecurity. “AI tools can never replace the human element,” he says. The question isn’t whether we can eliminate human risk; it’s how we can better prepare our people to be the strongest link in the cybersecurity chain rather than the weakest.
As AI continues to evolve, MSPs must prioritize human-centered strategies to stay ahead of cyber threats.
Photo: apops / Shutterstock
This post originally appeared on Smarter MSP.