Cybercriminals Are Already Leveraging AI to Get to Your Data. What Can You Do About It Today?

JUNE 13TH, 2024

AI Drives Massive Increases in Types of Cybercrime

Just this month, the Federal Bureau of Investigation (FBI) issued a new warning about the increasing threat of cybercriminals using artificial intelligence (AI). With so much talk about AI, we thought it would be worthwhile to take a deeper look at the types of cybercrime where AI is being employed and what you can do to protect your data today.

AI-Driven Phishing

A recent Help Net Security article states, “AI-driven phishing attacks deceive even the most aware users.” Meanwhile, SlashNext’s The State of Phishing 2024 report found a whopping 856 percent increase in malicious emails over the last 12 months. One of SlashNext’s findings says it all: there has been a 4,151 percent increase in malicious emails since the launch of ChatGPT. And phishing is just one of the many types of cybercrime on the rise.

That increase can be attributed to AI-driven cybercrime automation, which learns from massive datasets of prior emails to create and send phishing emails at scale. Cybercriminals using AI can convincingly mimic the style, look, and tone of an individual company’s communications, increasing their chances of deceiving the recipient. According to Check Point, Microsoft was the most imitated brand in phishing attacks in Q4 2023.

AI-Driven Password Guessing and Credential Stuffing 

AI-powered tools are also being used for password guessing and credential stuffing. Hashcat, for example, is a fast, efficient, and versatile password-cracking tool that automates brute-force and dictionary attacks, hashing candidate passwords and comparing the results against stolen password hashes until it finds a match.

HYPR’s security encyclopedia says, “Breaches of complex passwords are on the rise as hackers use Hashcat to crack passwords using known hashes. This is next-level hacking that goes beyond the simple stuffing of credentials into username/password fields on web applications.”
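
To make the mechanics concrete, here is a minimal, illustrative Python sketch of hash-based password guessing. The “leaked” hash and tiny wordlist are invented for the example; real tools like Hashcat can test billions of GPU-accelerated guesses per second against fast, unsalted hashes.

```python
# Illustrative only: guess a password by hashing candidates and
# comparing against a known (stolen) hash. The hash below is just
# the SHA-256 of a weak example password.
import hashlib

leaked_hash = hashlib.sha256(b"summer2024").hexdigest()
wordlist = ["password", "letmein", "summer2024", "qwerty123"]

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == leaked_hash:
        print(f"Cracked: {candidate}")
        break
```

The defensive takeaway: enforce long, unique passwords, store them with slow, salted algorithms such as bcrypt or Argon2, and add multi-factor authentication so a cracked password alone isn’t enough.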

AI-Driven Vulnerability Discovery

AI can scan target systems and discover vulnerabilities that are ripe for exploitation. These vulnerabilities are then analyzed and used to create attack sequences that overcome your defenses. Generative AI can also adapt to security systems and learn from the results of other attacks. By understanding the patterns and anomalies that commonly cause security weaknesses, hackers can identify exploitable software flaws, sometimes before security teams are aware of them.

Several AI-driven systems employ sophisticated techniques to analyze software code and detect vulnerabilities.

Static Application Security Testing (SAST)

AI-driven SAST tools automatically scan a program’s source code, bytecode, or binary code without executing it. By analyzing the complete codebase, AI can identify security flaws such as buffer overflows, SQL injection points, and cross-site scripting vulnerabilities. And by applying machine learning (ML) algorithms trained on previous analyses, these tools improve their ability to detect complex vulnerabilities over time.
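
As a rough illustration of what static analysis looks for, here is a small Python sketch that walks a program’s syntax tree and flags SQL statements built by string concatenation, a classic injection risk. The sample source and the SqlConcatFinder class are made up for this example; commercial AI-driven SAST tools are far more sophisticated.

```python
# A minimal sketch of the SAST idea: inspect code without running it
# and flag risky patterns (here, SQL built from concatenated strings).
import ast

SOURCE = '''
def get_user(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '" + name + "'")
'''

class SqlConcatFinder(ast.NodeVisitor):
    def visit_Call(self, node):
        # Look for calls like cursor.execute(<expression>)
        if isinstance(node.func, ast.Attribute) and node.func.attr == "execute":
            arg = node.args[0] if node.args else None
            # String concatenation (BinOp) or f-strings (JoinedStr) are suspect
            if isinstance(arg, (ast.BinOp, ast.JoinedStr)):
                print(f"Possible SQL injection at line {node.lineno}")
        self.generic_visit(node)

SqlConcatFinder().visit(ast.parse(SOURCE))
```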

Dynamic Application Security Testing (DAST)

Unlike SAST, DAST tools analyze running applications to find vulnerabilities, interacting with the application, exercising its functions, and analyzing its responses for exploitable behavior such as runtime exceptions and insecure configurations.
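
Here’s a minimal sketch of the DAST idea: probe a running application with malformed input and watch its responses. The localhost URL, parameter name, and probe strings are hypothetical placeholders; only run tests like this against systems you own or are authorized to assess.

```python
# A minimal sketch of dynamic testing: send suspicious payloads to a
# running test application and look for error responses or reflections.
import requests

TARGET = "http://localhost:8080/search"   # hypothetical test application
PROBES = ["'", "<script>alert(1)</script>", "../../etc/passwd"]

for probe in PROBES:
    resp = requests.get(TARGET, params={"q": probe}, timeout=5)
    # 5xx responses or reflected payloads hint at injection or XSS issues
    if resp.status_code >= 500 or probe in resp.text:
        print(f"Suspicious response for payload {probe!r}: HTTP {resp.status_code}")
```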

Software Composition Analysis (SCA)

SCA tools employ AI to analyze an application's components and identify known vulnerabilities in third-party libraries and open-source code. AI can maintain and rapidly update a database of known vulnerabilities, compare the components used in an application against that database, and flag anything built on outdated or vulnerable third-party code.
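
Conceptually, SCA boils down to comparing your dependency list against an advisory database, as in this small Python sketch. The KNOWN_VULNERABLE map is an invented stand-in; real scanners such as pip-audit or OSV-based tools query maintained vulnerability databases.

```python
# A minimal sketch of the SCA concept: check installed components
# against a (here, made-up) map of known-vulnerable versions.
from importlib import metadata

KNOWN_VULNERABLE = {            # hypothetical advisory data
    "requests": {"2.19.0", "2.19.1"},
    "urllib3": {"1.25.8"},
}

for dist in metadata.distributions():
    name = dist.metadata["Name"]
    if not name:
        continue
    if dist.version in KNOWN_VULNERABLE.get(name.lower(), set()):
        print(f"{name} {dist.version} has a known vulnerability -- update it")
```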

Anomaly Detection

ML models are trained on datasets of normal code activity so they learn to recognize patterns and deviations; applied to new code, they flag anomalies that may indicate hidden vulnerabilities. This allows hackers to identify zero-day vulnerabilities, flaws that have yet to be recorded in a security database such as the Cybersecurity and Infrastructure Security Agency (CISA) Known Exploited Vulnerabilities Catalog.
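
The pattern behind this kind of anomaly detection is simple: train a model on examples of normal behavior, then flag anything that deviates. The sketch below uses scikit-learn’s IsolationForest on an invented two-feature dataset (requests per minute and bytes transferred) purely to illustrate that pattern; it isn’t tied to any particular product or data source.

```python
# A minimal sketch of anomaly detection: fit a model on "normal"
# activity, then score new observations as normal (1) or anomalous (-1).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" activity: ~100 requests/minute, ~2,000 bytes each
normal = rng.normal(loc=[100, 2_000], scale=[10, 200], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

observed = np.array([[102, 2_100],      # looks normal
                     [350, 90_000]])    # large, unusual transfer
print(model.predict(observed))          # 1 = normal, -1 = anomaly
```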

Evolving Evasive Techniques

Cybercriminals also leverage AI-driven malware that uses ML algorithms to change its codebase dynamically, which is what makes prevention so challenging where AI and cybercrime meet. Several types of evasive malware are already in circulation.

Polymorphic and Metamorphic Malware

Polymorphic and metamorphic malware change their code as they propagate through a system. Polymorphic malware re-encrypts itself each time it infects a new system, changing its binary pattern without altering its primary functions or payload behavior. That means its signature or hash value changes, so signature-based systems can’t recognize it.
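
A benign way to see why this defeats signature matching: re-encoding the same payload with a different key produces a different hash almost every time. The trivial XOR “packer” and placeholder payload below are invented purely for illustration; real polymorphic engines are vastly more complex.

```python
# Illustrative only: the same harmless payload, re-encoded with a
# random key, yields a different hash (signature) on each run even
# though the underlying content is unchanged.
import hashlib, os

payload = b"the same underlying behavior"   # placeholder, not real malware

def repack(data: bytes) -> bytes:
    key = os.urandom(1)[0]
    # Prepend the key, then apply a trivial XOR "encryption"
    return bytes([key]) + bytes(b ^ key for b in data)

for _ in range(3):
    variant = repack(payload)
    print(hashlib.sha256(variant).hexdigest()[:16])   # almost certainly differs each run
```

That’s why modern defenses lean on behavior-based and ML-driven detection rather than signatures alone.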

Metamorphic techniques employ an even higher level of sophistication. The malware rewrites its own code entirely before replicating. Changes include reordering instructions, using different registers, or replacing parts of the code with functionally equivalent ones. Put simply, the malware becomes a new version of itself, making detection by traditional means incredibly challenging.

Contextual Awareness

Some AI-enhanced malware leverages contextual awareness. This approach analyzes the surrounding environment and adapts its behavior based on factors like time, network conditions, user behavior, or security measures. This allows malware to remain dormant or execute attacks at opportune moments, evading detection. 

For example, AI-driven malware can analyze its environment to determine whether it runs in a virtual machine (VM) or sandbox environment. Based on the environment, the malware can stay dormant or display benign behavior to evade detection and analysis. Once it identifies that it’s operating on an actual target’s machine, it can activate and execute its malicious functions.

This same contextual awareness drives adaptive data exfiltration, which allows the malware to assess which data is most sensitive or valuable and adapt its data exfiltration tactics to hide from network-based anomaly detection systems.

Data Protection Against AI: Best Practices and Methodologies

Given that many types of cybercrime employ AI, it’s common sense that you’ll need a multilayered approach to data protection. Here are some steps you can take today to bolster your defenses.

Institute Ongoing Cybersecurity Training Programs

The 4,151 percent increase in malicious emails since the launch of ChatGPT, which we noted at the opening of this post, should drive everyone responsible for data protection to ensure their organization’s internal cybersecurity program is comprehensive, effective, and validated.

Understanding the Threat Landscape

Your training program should start by helping your people understand the threat landscape. That begins with basic cybersecurity awareness, including the types of cyber threats: malware, ransomware, phishing, social engineering, and so on. The training should also explain how AI-driven threats are increasing, how they are being employed, and give examples of recent incidents, such as the attacks by Iran and North Korea reported by Microsoft.

Recognizing and Responding to Phishing Attacks

Teach your people how to identify phishing emails and malicious websites. Training should cover examples of common phishing tactics, what to look for when verifying whether an email or website is authentic and should be trusted, and what to do if they encounter a suspicious email, website, or attachment. Use simulated phishing attacks to train your employees and test the effectiveness of your program.

Best Practices for Data Security

Everyone must understand password hygiene. TechTarget offers its Top 6 password hygiene tips and best practices here. These include creating strong passwords, using password managers, and using a different password on each platform. You must also provide guidelines for how sensitive company information should be handled and shared, the importance of encryption, and secure data transmission methods.
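
As a small illustration of what “strong and unique” means in practice, here is a Python sketch that generates a long random password using the standard library’s secrets module; in day-to-day use, a password manager does this for you and stores the result per site.

```python
# A minimal sketch of strong password generation using a
# cryptographically secure random source.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())   # a unique password for each platform
```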

Security Protocols and Compliance

Training must also cover your company’s specific policies regarding cybersecurity, including the use of company and personal devices, remote work security protocols, and compliance with relevant laws and regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). You must also provide clear instructions regarding reporting procedures: who to contact and how to proceed if a security breach, ransomware attack, or other cyber incident is suspected.

Cybersecurity Training: Benefits of Outsourcing

With IT teams typically already overburdened, and given the time-intensive work of developing and executing an internal training program, it’s worth looking to an outside company for help. Cybercrime Magazine lists the top cybersecurity education and training programs here. These companies bring expertise and experience, an objective viewpoint, resources and tools, and the ability to customize courses for your specific requirements.

Employ AI-Driven Data Protection Technologies

As AI-driven attacks ramp up, it’s vital to implement AI-driven defenses. That’s why Arcserve partners with Sophos to include Intercept X Advanced for Server with its products. Intercept X detects both known and unknown attacks without relying on signatures by employing deep learning, AI, and control technology to stop attacks before they impact your systems. By stopping the techniques used throughout the attack chain, Intercept X for Server keeps your organization secure against fileless attacks and zero-day exploits.

The software includes anti-ransomware capabilities to detect and block the malicious encryption processes used in ransomware attacks. Encrypted files can be rolled back to a safe state, minimizing the impact on your business. Anti-exploit technology stops the techniques cybercriminals rely on to compromise devices, steal credentials, and distribute malware. 

Follow the 3-2-1-1 Strategy: Immutable Storage Is a Must

At Arcserve, we are strong advocates of the 3-2-1-1 backup strategy. This post details the strategy, but the last “1” is the most crucial component: it stands for immutable storage. Read this post for a deep dive into immutable storage, but essentially, it’s a way of storing your files so that even AI-driven attacks can’t get to them (at least for now).

When your data is backed up to immutable storage, it’s saved in a write-once-read-many (WORM) format that unauthorized users can’t alter or delete. That gives you a last line of defense if your other prevention strategies fail.
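
For a generic sense of what WORM storage looks like in code, here is a sketch that writes a backup object to an Amazon S3 bucket with Object Lock enabled. The bucket name and file path are placeholders, and this is not how Arcserve products implement immutability; it simply illustrates the write-once, read-many idea: until the retention date passes, the object can’t be overwritten or deleted, even by an account with delete permissions.

```python
# A generic WORM illustration using S3 Object Lock. The bucket must be
# created with Object Lock enabled; names and paths are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-backup-bucket",                 # placeholder bucket
    Key="backups/2024-06-13-full.bak",
    Body=open("2024-06-13-full.bak", "rb"),         # placeholder backup file
    ObjectLockMode="COMPLIANCE",                    # retention can't be shortened or removed
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```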

Conclusion

AI will continue to evolve, creating new threats and vulnerabilities. By working with an Arcserve partner, you can access the latest expertise and experience in dealing with threats and implementing the proper data protections for your needs.

Find an Arcserve Technology Partner here.
