Unmasking the next wave: LLMs and evolving cyberthreats

LLMs

LLMsThis week, we bring you additional interviews with experts on the cybersecurity dangers posed by large language models (LLMs) such as Claude, DeepSeek, and ChatGPT. With the rapid rise in the use of these services comes an increasing set of cybersecurity risks that MSPs must continually address. Additionally, artificial intelligence (AI) constantly evolves, with developers regularly updating new models and versions. Therefore, this is a topic that SmarterMSP.com will continually monitor. Barracuda Networks recently conducted a two-part in-depth analysis of some of the security challenges.

The security challenge partly stems from the sheer speed at which users have adopted LLMs. For instance, more than one-third of employees globally used ChatGPT in their work as of December 2024, and this number is expected to continue growing. ChatGPT dominates global workplaces with over 75 percent adoption, making it the clear leader among workplace AI tools.

Data leaks and AI-driven scams

“AI tools like Claude, ChatGPT, and DeepSeek are everywhere now, but they can quietly open the door to cybersecurity problems that many teams don’t see coming,” warns Sal Mohommed, Founder & CEO of AI company LangSync.

Mohommed shares that one primary concern is people copying and pasting sensitive information into public AI chats. “Even if the company behind the AI says it won’t store that data, you’ve already sent it out into the world. That’s a risk.” He goes on to add that another cybersecurity issue presented by LLMs is the possibility of very believable scams being easily fallen for. “AI makes it easy to write emails or messages that sound believable. Hackers are using it to impersonate people, mimic writing styles, or even pull small details from social media to sound more convincing.”

His company, LangSync, teamed with a fintech client to mitigate LLM security threats. “Instead of using public AI tools, we helped them set up a private, in-house chatbot connected only to their own data. We also added tracking to spot if anyone tries to input sensitive details. Now they use AI, but with clear guardrails.” 

Mohommed adds that organizations can take simple steps to mitigate some of the cybersecurity risks associated with LLMs. “Train your teams on what not to share with AI, the same way you train them about phishing. Watch for patterns, like unusual traffic to AI tools. And when possible, use secure AI systems that keep your data locked inside your network.” He adds that AI is helpful, but like any tool, it needs a safety plan.

MSPs in education face unique LLM security challenges

For MSPs in the education vertical, the risks are even more pronounced.

Mark Friend, Director at Classroom365, leads a team supporting ICT and network infrastructure in schools across the UK. His firsthand experience provides valuable insight into the security challenges MSPs face when it comes to LLMs. “So much of my time is spent dealing with the real-world consequences of premature tech adoption, particularly where it intersects with safeguarding, web blocking, and data handling. We’ve had more than one situation where ChatGPT wasn’t the productivity tool it was intended to be. It was a discipline issue waiting to happen,” Friend explains. He also emphasizes the significant risks LLMs present in the educational sector.

“We’ve seen staff rewriting IEPs on unapproved platforms, students using prompts to bypass content filters, and a lack of clarity around whether tools like Claude or DeepSeek fall under GDPR data-sharing regulations,” Friend goes on to note, adding that most of these incidents go unrecorded.

“These activities are happening covertly, through home devices or personal accounts, while IT and senior leadership teams are still catching up with policy cycles,” he adds, noting that CISA and MSP guidelines often don’t reach the school ecosystem in time. “Our security perimeter typically includes managed Wi-Fi, desktop lockdowns, and filters like Smoothwall or Securly. But once an LLM makes its way into a browser extension or is repackaged as an ‘essay helper,’ it bypasses standard web categorization.”

This has forced them to make quick, sometimes disruptive policy adjustments. “Occasionally, we’ve had to adjust mid-exam week when content surfaced through a chatbot that contradicted the school’s filtering rules,” he concludes.

Three key LLM risks and actionable steps for MSPs

Daniel Gorlovetsky, CEO of TVLTech, highlights three key risk areas:

  • Data leakage: Employees inadvertently feed sensitive data into these tools without realizing they are storing or using it to improve the model.
  • Prompt injection and manipulation: Attackers craft inputs to hijack the model’s behavior, particularly in embedded systems like chatbots or automated workflows.
  • Over-trust in AI outputs: Teams accepting model responses as fact without validation, opening the door to social engineering and poor decision-making.

Gorlovetsky explains that MSPs can take several proactive steps to mitigate these risks:

  • Establish clear usage policies that specify which data employees can share with public LLMs.
  • Use self-hosted or API versions of models with strict data handling and no training retention.
  • Deploy prompt filtering and sanitization, and actively monitor usage patterns for potential abuse.
  • Educate teams—this tech may seem harmless, but it’s like giving every employee a megaphone connected to the internet.

“AI is a force multiplier for both good and bad. If you’re not building security into your LLM use now, you’re going to be playing catch-up later,” Gorlovetsky warns.

As the use of LLMs like ChatGPT, Claude, and DeepSeek continues to rise, so too does the potential for cybersecurity risks. From data leaks to AI-driven scams, the threat landscape is evolving rapidly, and managed service providers (MSPs) must remain vigilant to protect sensitive information.

wd_hustle id=”65″ type=”embedded”/]

Photo: somavarapumadhavi / Shutterstock

This post originally appeared on Smarter MSP.

Leave a Reply

Your email address will not be published. Required fields are marked *