
As language learning models (LLMs) become more prevalent, the cyber dangers posed by DeepSeek, ChatGPT, Claude, and others are just beginning to be understood.
Jacob Anderson, owner of Beyond Ordinary, states that cyber personnel are overworked and continue to have more work added to their backlog every day. “To ease that burden, they are turning to these LLM services to get good advice on how to react to anomalous activity, or the mundane daily operation of configuration management and remote control.” Still, he adds that as these personnel use the tools, they will start to trust them more and rely upon the “expertise” of the tool more.
“The downstream consequence of this trust is a blinding to errors or misinformation. We saw at Apple WWDC that as the LLMs are presented with more complex problems their decision ability becomes less reliable,” Anderson warns, noting that with human nature being trusting, a lot will start to forgive the blatant errors from the LLMs during the most critical response periods when the human is in desperate need of actionable support in a dire situation. Such incidents might include the deployment of ransomware, a critical hardware failure, or a random software configuration snafu. “This is a description of ‘death by a thousand cuts. It is the death of trust that happens slowly and without notice when each cut of doubt goes unnoticed or accepted as nuance.”
Warning of escalating AI threats
Engineering strategist and artificial intelligence (AI) expert Vinod Goje shares the concern.
“We’re sleepwalking into the most complex threat landscape since the dawn of cloud computing,” Goje says, adding that he has seen teams roll out LLMs like they’re toys. “Meanwhile, attackers are turning them into tools for espionage and data exfiltration. DeepSeek didn’t just fail a few safety checks; it failed all of them.” He adds that combining this with copy-paste injection, poisoned training pipelines, and hijacked API keys creates a recipe for disaster.
“The problem is, traditional security playbooks don’t work here. You can’t just firewall your way out of a language-based exploit. Suppose enterprises don’t start treating these models as high-risk assets. In that case, they’re going to learn the hard way that helpful assistants can also be very sophisticated threats,” Goje highlights, advising cyber-professionals to follow this three-point plan:
- Organizations must implement AI-specific penetration testing now. Traditional security assessments miss LLM vulnerabilities.
- CISA’s zero-trust approach for AI assumes that breaches are inevitable, shifting the focus from prevention to rapid detection.
- Managed service providers (MSPs) require behavioral monitoring systems that detect subtle model manipulation attempts, as attackers aren’t breaking in; they’re walking through the front door disguised as legitimate conversations.
LLMs, hacking, and malware distribution
Meanwhile, AI expert and CodeSpy founder Raj Dandage says the most significant threat posed by any LLM/LRM is hacking and malware distribution. “These models aggregate much of the knowledge on the Internet during training. With tool usage (which is mostly unconstrained by model providers), they have access to any other system they need.”
“Imagine being able to try every exploit in the CVE database in seconds. Worse, imagine being able to scan every single open-source project for zero-day vulnerabilities such as buffer overruns. That’s the potential that these models have,” Dandage says. He adds that with new technologies like MCP, which allow external LLMs to interact with a desktop computer, a rogue model can cause significant damage. “For businesses trying to implement an open-source model in-house, the risk is especially high. While models like ChatGPT have a level of risk mitigation built in, open source LLMs are often free of any of these restrictions.”
Dandage introduces another three-step strategy to address these emerging risks:
- Block MCP and other tool usage to prevent unauthorized access and potential security breaches.
- Regularly update your systems and software to address known vulnerabilities and enhance security.
- Adhere to established security best practices to fortify your organization’s defenses against evolving threats.
The privacy risks of LLMs
Dandage also points out indirect threats as a problem.
“Popular LLMs receive terabytes of private information daily, with very few controls on this information. Business users enter various trade secrets and private credentials into prompts to save time on projects. While major LLM providers implement vetted data usage policies, smaller AI developers often lack such safeguards,” Dandage notes. He adds that not properly controlling the data that users put into LLMs can be as bad as leaving a corporate laptop in public with no security.
“And the risk is even greater in regulated industries. For example, we have seen users in medical institutions put patient information into LLMs. This can expose the business to major legal issues,” Dandage explains. He also shares that to mitigate indirect threats, being aware of the terms and policies of any LLMs used in the organization is key. “Businesses must implement mandatory training programs that educate employees on the types of information appropriate for sharing with AI models.”
As organizations increasingly integrate LLMs into their operations, the cybersecurity landscape is evolving rapidly. Implementing AI-specific penetration testing, adopting zero-trust frameworks, and educating staff on secure AI usage are essential steps in safeguarding against potential vulnerabilities. By recognizing and mitigating these risks, businesses can harness the benefits of LLMs while maintaining robust cybersecurity defenses.
Photo: antoniodiaz / Shutterstock
This post originally appeared on Smarter MSP.