Fighting Back
How businesses combat AI-based ransomware attacks
In today's ever-evolving digital landscape, businesses are facing increasingly advanced cyberattacks. One report estimates that ransomware exposure costs the world US$57 billion annually.
Thanks to AI, attacks are becoming more frequent, targeted and sophisticated. However, businesses can also leverage AI to strengthen their cybersecurity measures and protect themselves against these evolving threats.
AI-generated phishing & social engineering
Attackers are using large language models to create realistic and convincing phishing emails. Illicit AI tools, such as WormGPT and FraudGPT, have also emerged to provide cybercriminals with advice on constructing these messages.
Traditional advice for identifying phishing emails may become obsolete as AI makes these messages more convincing. Spelling, syntax and grammatical errors have typically been a straightforward way of recognizing that someone may not be who they claim to be, but cybercriminals are now using AI to mimic the tone and style of colleagues or business professionals.
Advancements in AI-generated voices and deepfakes also make it difficult for people to believe what they see and hear. In one incident, a finance worker paid more than US$25 million to cybercriminals who used deepfake technology to pose as his colleagues.
On the defensive side, AI's key advantage in threat detection is its ability to process vast amounts of data quickly; traditional approaches, in which security analysts manually sift through data, are slow and inefficient. By analyzing data from sources such as server logs, network traffic and user activity logs, AI can monitor website traffic and user behavior for unusual patterns and anomalies, such as a user logging in from an atypical location.
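As a rough illustration, the simplest version of that location check can be expressed in a few lines of Python. The sketch below is a rule-based stand-in for the statistical models real detection platforms use, and the user names and countries are invented for the example:

    from collections import defaultdict

    class LoginAnomalyDetector:
        def __init__(self):
            # Per-user history of countries seen in past trusted logins
            self.known_locations = defaultdict(set)

        def observe(self, user, country):
            """Record a historical login location for a user."""
            self.known_locations[user].add(country)

        def is_anomalous(self, user, country):
            """Flag a login from a country this user has never logged in from."""
            history = self.known_locations[user]
            return bool(history) and country not in history

    detector = LoginAnomalyDetector()
    detector.observe("j.doe", "US")
    detector.observe("j.doe", "GB")
    print(detector.is_anomalous("j.doe", "RU"))  # True - never seen before
    print(detector.is_anomalous("j.doe", "US"))  # False - matches history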
Additionally, it is essential that businesses update their phishing training so that employees are aware of how rapidly AI-generated phishing is advancing.
Malware mutation through AI
Cybercriminals use AI to automatically generate endless variations of malware code, evading traditional antivirus software. This constant evolution allows polymorphic malware to slip past signature-based detection methods, which rely on known patterns or signatures. One report found that AI could be used to generate 10,000 malware variants that were highly difficult to detect. Polymorphic ransomware can also modify its encryption routines to bypass traditional security measures and successfully encrypt a victim's data.
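To see why signature matching struggles here, consider a minimal Python sketch of hash-based detection. The "payloads" are harmless placeholder strings; the point is simply that a one-byte change produces a brand-new signature:

    import hashlib

    # Hypothetical signature database of previously seen malicious samples
    KNOWN_BAD_SIGNATURES = {
        hashlib.sha256(b"original malicious payload").hexdigest(),
    }

    def signature_match(sample: bytes) -> bool:
        """Return True only if this exact byte sequence has been seen before."""
        return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_SIGNATURES

    print(signature_match(b"original malicious payload"))   # True - known sample
    print(signature_match(b"original malicious payload!"))  # False - a tiny mutation evades the signature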
To counter this, sandboxing tools should be used to run suspicious files in isolated environments, and AI can help test untrusted code and analyze malware to understand how it works and what it has targeted.
Smart targeting & timing
Cybercriminals utilize AI to scrape the dark web and social media to identify vulnerable and high-value targets. This reconnaissance includes finding credentials leaked in previous breaches, identifying the people most susceptible to social engineering and determining which businesses may be running outdated systems. One report revealed a dramatic increase in automated scanning activity, with up to 36,000 scans per second recorded globally. This AI-powered approach allows cybercriminals to automate their attacks and scale their operations across a broad range of businesses and organizations with minimal manual legwork.
On the defensive side, businesses can use AI risk scoring to identify and patch vulnerabilities before cybercriminals find them. These tools can analyze security logs and access patterns to flag areas of concern before a breach can occur.
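In its simplest form, risk scoring is a weighted sum of signals pulled from logs and asset inventories. The Python sketch below uses invented field names and weights purely for illustration; a real tool would derive these from much richer data:

    def risk_score(asset: dict) -> float:
        """Combine simple signals into a single priority score (illustrative weights)."""
        score = 0.0
        score += 2.0 * asset.get("days_since_last_patch", 0) / 30   # stale patching
        score += 5.0 * len(asset.get("exposed_ports", []))          # attack surface
        score += 1.0 * asset.get("failed_logins_last_24h", 0)       # credential probing
        score += 10.0 if asset.get("credentials_found_in_breach") else 0.0
        return score

    assets = [
        {"name": "hr-portal", "days_since_last_patch": 90, "exposed_ports": [443],
         "failed_logins_last_24h": 40, "credentials_found_in_breach": True},
        {"name": "intranet-wiki", "days_since_last_patch": 10, "exposed_ports": [],
         "failed_logins_last_24h": 2, "credentials_found_in_breach": False},
    ]
    # Patch the highest-scoring systems first
    for asset in sorted(assets, key=risk_score, reverse=True):
        print(asset["name"], round(risk_score(asset), 1))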
Continuous AI monitoring can also detect anomalies, such as login attempts outside normal working hours or from unusual locations, and automatically isolate systems in the event of an attack.
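A bare-bones version of that rule, again in Python, might look like the following; the working-hours policy and the isolate_host() response hook are placeholders for whatever schedule and endpoint-isolation tooling a business actually uses:

    from datetime import datetime

    WORK_HOURS = range(7, 19)  # 07:00-18:59 local time, an assumed policy

    def isolate_host(hostname: str) -> None:
        # Placeholder for a real endpoint-isolation or firewall action
        print(f"[response] isolating {hostname} from the network")

    def check_login(user: str, hostname: str, when: datetime) -> None:
        """Alert on out-of-hours logins and trigger an automated response."""
        if when.hour not in WORK_HOURS:
            print(f"[alert] {user} logged in to {hostname} at {when:%H:%M}")
            isolate_host(hostname)

    check_login("j.doe", "finance-01", datetime(2025, 5, 2, 3, 14))  # 03:14 triggers isolation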
AI-powered ransom automation
Ransomware-as-a-Service (RaaS) has become a popular business model in which cybercriminals sell ransomware code or malware to other hackers, enabling them to launch their own cyberattacks. AI takes this a step further by allowing cybercriminals to automate various stages of an attack, from initial targeting to demanding ransom and even negotiating via chatbots. These chatbots can tailor their demands to the type of business targeted and the sensitivity of the data.
However, AI can also be used to simulate and rehearse attack scenarios so that cybersecurity professionals are well equipped to deal with these types of attacks.
Additionally, AI tools can be used to respond to incidents and contain attacks. AI systems can identify abnormal encryption behavior and isolate affected systems before data can be fully encrypted.
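One common signal is a burst of file rewrites whose contents look like random data, since encrypted output has very high entropy. The Python sketch below shows the idea with illustrative thresholds rather than tuned production values:

    import math
    import os
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        """Entropy in bits per byte; encrypted data approaches the maximum of 8."""
        if not data:
            return 0.0
        counts = Counter(data)
        total = len(data)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def looks_like_ransomware(recent_writes, writes_per_minute):
        """Flag a burst of rewrites where most new contents are near-random."""
        high_entropy = sum(1 for blob in recent_writes if shannon_entropy(blob) > 7.5)
        return writes_per_minute > 100 and high_entropy / max(len(recent_writes), 1) > 0.8

    rewritten_files = [os.urandom(4096) for _ in range(10)]  # stand-in for freshly encrypted files
    print(looks_like_ransomware(rewritten_files, writes_per_minute=300))  # True - isolate the host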
AI-driven privilege escalation
Cybercriminals are using AI to identify weak credentials and escalate privileges. This enables them to access resources or perform actions that require higher levels of permission.
Once cybercriminals have gained higher privileges, they can move laterally across networks, disable security systems, deploy ransomware across a much wider area and target the most valuable resources and data. In one report, cybersecurity researchers revealed security flaws in Google's Vertex AI machine learning platform that enabled them to escalate privileges and gain unauthorized access to a range of data.
Using identity threat detection and response (ITDR), businesses can have AI monitor how accounts typically behave and flag any unusual privilege escalations. Moreover, implementing a zero-trust architecture ensures that no user, system or device is automatically trusted. Every access request is verified, and permissions are granted on a least-privilege basis, meaning users can only access what is necessary for their role — limiting the potential damage if credentials are compromised.
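Stripped to its essentials, a least-privilege check verifies every request against an explicit permission list and treats anything outside it as a signal worth flagging. The roles, resources and alert in this Python sketch are invented for illustration:

    # Explicit, minimal permissions per role - nothing is trusted by default
    ROLE_PERMISSIONS = {
        "finance-analyst": {"read:invoices", "read:payroll"},
        "it-admin": {"read:servers", "write:servers"},
    }

    def authorize(user_role: str, action: str) -> bool:
        """Allow only actions explicitly granted to the role; flag everything else."""
        allowed = action in ROLE_PERMISSIONS.get(user_role, set())
        if not allowed:
            # An unexpected request for higher privileges is exactly what ITDR tooling should surface
            print(f"[itdr-alert] {user_role} denied '{action}'")
        return allowed

    print(authorize("finance-analyst", "read:invoices"))  # True - within role
    print(authorize("finance-analyst", "write:servers"))  # False - escalation attempt flagged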
The human element
While AI offers powerful tools to defend against ransomware, people remain one of the most significant weak points in any organization’s security posture. Cybercriminals know this, which is why social engineering attacks, whether delivered via email, messaging apps or phone calls, are often the initial stage of a ransomware campaign. AI has made these attacks more convincing than ever, but it is still human error, such as clicking a malicious link or trusting a fraudulent voice call, that opens the door.
AI-driven simulations can be used to create highly realistic phishing “war games,” exposing employees to the same kinds of tactics attackers are using in the wild. These exercises can be tailored to mimic the style and tone of internal communications or simulate deepfake audio calls from senior staff. By repeatedly training staff in environments that feel authentic, businesses can significantly increase resilience and reduce the likelihood of a real-world breach.
Moreover, pairing this training with behavioral analytics ensures that when mistakes do occur, they are detected quickly and contained. For example, if an employee inadvertently clicks a phishing link, AI monitoring tools can flag unusual activity from their account and isolate it before the ransomware spreads.
In short, while technology is essential, businesses must foster a security-first culture in which every employee understands their role in defense. AI may be the weapon of choice for both attackers and defenders, but people remain the decisive factor in who ultimately prevails.
Daniel Pearson is the CEO of KnownHost, a leading managed web-hosting service provider. Pearson's entrepreneurial drive and extensive industry knowledge have solidified his reputation as a respected figure in the tech community. With a relentless pursuit of innovation, he continues to shape the hosting industry and champion the open-source ecosystem.