From defense to deception: Generative AI’s role in cybersecurity & fraud

Read Time: 6 Min
The rapid growth of generative AI and machine learning has led to concerns about enhanced cyber and fraud threats. Here’s how organizations can help protect themselves.

By Amber Buening, Cybersecurity Outreach Director, Britney Kloepper, Fraud Protection Services Director, and Adam Zielachowski, Fraud Engineering Director, at Huntington Commercial Bank.

Key takeaways

  • While AI drives efficiency and can enhance organizational security, it also poses cybersecurity and fraud threats.
  • AI-enabled attacks, such as deepfakes and AI-driven social engineering, leverage advanced techniques and real-time adaptability.
  • Regular training on AI-enabled threats and strict adherence to operational procedures can help mitigate risk.
  • Implementing AI-enabled defensive systems can help enhance an organization’s ability to preempt and respond to cyber and fraud threats.

The rise of generative artificial intelligence (AI) and machine learning has unleashed a technological revolution, promising rapid-fire innovation and efficiencies for industries adopting it. However, these technologies can be a double-edged sword. Malicious actors are harnessing these same tools to craft highly effective fraud and cyber schemes that can elude detection, including sophisticated business email compromise (BEC) and social engineering.

As AI’s capabilities continue to expand, it is imperative for organizations to understand the risks, the opportunities, common tactics and tools used by malicious actors, and the preventative measures needed to combat these rising threats.

AI’s dual role as defender and adversary

The advent of generative AI platforms like ChatGPT, Jasper, and DALL-E has ushered in a new era of accessible AI and machine learning capabilities. Nearly anyone with an internet connection can harness the power of AI. Within just two months of its launch, ChatGPT was estimated to have reached 100 million monthly active users†. These technologies have also empowered businesses to automate complex processes and enhance decision-making capabilities. A global survey found that 65% of organizations regularly use generative AI in at least one business function today‡.

Cybersecurity and fraud prevention are among the business functions that have benefited from this technology. Machine learning and AI models excel at anomaly detection and enhanced security monitoring. AI-enabled systems can scrutinize data and system patterns to spot potential threats, identify vulnerabilities, and help defenders reduce false positives, accelerating response times.
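
To make the anomaly detection idea concrete, here is a minimal sketch using scikit-learn's IsolationForest. The login features, thresholds, and synthetic data below are illustrative assumptions for this article, not a description of any particular security product.

```python
# A minimal sketch of AI-enabled anomaly detection on login activity.
# Features and data are synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" logins: business hours, few failed attempts,
# modest data transfer volumes (log10 of bytes).
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),   # hour of day
    rng.poisson(1, 500),      # failed attempts before success
    rng.normal(5, 1, 500),    # log10(bytes transferred)
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login with nine failed attempts and a large transfer.
suspect = np.array([[3, 9, 8]])
print(model.predict(suspect))        # -1 flags the event as anomalous
print(model.score_samples(suspect))  # lower scores mean more anomalous
```

Production systems use far richer signals and continuous retraining, but the principle is the same: learn what normal looks like, then surface what deviates from it.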

Yet, while AI is a powerful ally, it can also pose ominous new threats. Threat actors are using generative AI to enhance the sophistication, frequency, and efficacy of their attacks. Phishing emails have reportedly increased by 1,265% since the launch of ChatGPT§, which coincides with the rise in overall chatbot usage. These AI tools can also be leveraged to automate and scale attacks, making them more challenging to detect and counter.

The growing spectrum of AI-enabled threats

As the use of generative AI for malicious purposes evolves, so do the variety and complexity of the threats. These tools can generate convincingly human-like responses, assist in malware development, and even build working web apps.

These types of attacks can often bypass traditional security measures to exploit vulnerabilities in new ways:

  • Audio deepfakes and voice cloning: AI-generated audio deepfakes can mimic a person’s voice with startling accuracy to deceive victims into believing they are speaking with a trusted individual.
  • Deepfake videos: Threat actors can use AI to generate fake but surprisingly realistic videos to impersonate individuals. The purpose of these videos is often to spread misinformation, deceive victims into performing financial actions, or manipulate public opinion.
  • AI chatbot phishing: Natural language processing (NLP) is a branch of AI that enables machines to ‘understand’ and respond to text or spoken words in much the same way humans can, combining rule-based modeling of human language with statistical and machine learning models. AI chatbots can be programmed to engage with victims in real time, crafting tailored phishing messages that appear legitimate. These malicious chatbots can respond to inquiries and adapt their tactics based on the victim’s responses‖. The sketch after this list shows the same underlying technique applied defensively.
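
The same NLP techniques also work in defense. Below is a minimal, illustrative sketch of a text classifier that scores messages for phishing-style language; the tiny hand-written dataset and scikit-learn pipeline are assumptions for demonstration, not a production detector.

```python
# A minimal, defender-side NLP sketch: scoring text for phishing-style
# language with TF-IDF features. The four training messages are toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your account now or it will be suspended",     # phishing-like
    "Wire the payment today, the CEO needs this handled quietly",  # phishing-like
    "Attached is the agenda for Thursday's project meeting",       # benign
    "The lunch order is in the breakroom, help yourselves",        # benign
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(messages, labels)

test = ["Immediate action required: confirm your account credentials"]
print(clf.predict_proba(test)[0][1])  # estimated probability of phishing-like text
```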

Within weeks of ChatGPT’s launch, one threat intelligence company found that threat actors on the dark web were sharing ways to use the platform to develop malware, create convincing social engineering campaigns, and spread misinformation.

AI-enabled social engineering threats

When in the hands of malicious actors, AI-enabled tools can simplify BEC, phishing, smishing, vishing, quishing (QR code-based phishing), pig butchering (a long-con scam in which a victim is lured into investing ever-larger sums in a fraudulent scheme), and other social engineering attacks. Cybercriminals can use generative AI to craft convincing messages without the typical phishing and BEC red flags, such as misspellings or grammatical errors, making them harder to detect. Given the damage these attacks can do – BEC losses amounted to more than $2.9 billion in 2023¶ – this has become a significant concern reported to our teams across businesses and organizations of all types.

Threat actors can also use these tools to scale social engineering campaigns. AI-enabled tools can scour public records, social media, frequently visited websites, and other sources to quickly compile detailed profiles of potential targets and build convincing, personalized messages. For example, an AI-generated email might reference specific activities or professional relationships, such as a prescription refill or a recent event attended with a C-suite executive. These details lend authenticity to phishing and smishing messages, making them more likely to succeed.

What you and your teams should look out for with AI threats

  1. Audio clues: Listen for choppy language, unusual pauses, lack of breathing, and strange sentence structures – all signs of voice cloning.
  2. Visual clues: Watch for unnatural movements, distorted proportions, lighting discrepancies, and inconsistencies that suggest the use of deepfake technology.
  3. BEC red flags: While AI can remove some indicators of these threats, classic BEC scam signs persist. A sense of urgency, unusual requests, or large fund transfers without verification should trigger alarm bells. A simple rule-based sketch of these checks follows this list.
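
As a simple illustration of the red flags in item 3, the sketch below screens a message against a few rule-based patterns. The keyword lists are illustrative assumptions, not a vetted detection rule set, and such rules would only supplement, never replace, out-of-band verification procedures.

```python
# A minimal sketch of rule-based BEC red-flag screening. The patterns
# mirror the classic signs listed above; the keyword sets are
# illustrative assumptions, not a vetted rule set.
import re

RED_FLAG_PATTERNS = {
    "urgency": r"\b(urgent|immediately|right away|asap|today)\b",
    "funds transfer": r"\b(wire|transfer|payment|invoice|gift cards?)\b",
    "secrecy / verification bypass": r"\b(confidential|don't tell|do not call)\b",
}

def bec_red_flags(message: str) -> list[str]:
    """Return the names of any red-flag patterns found in the message."""
    text = message.lower()
    return [name for name, pattern in RED_FLAG_PATTERNS.items()
            if re.search(pattern, text)]

email = "Urgent request: wire the payment today and keep it confidential."
print(bec_red_flags(email))
# ['urgency', 'funds transfer', 'secrecy / verification bypass']
```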

AI-enabled threat prevention best practices

Employee training on AI threats

  • Stress the importance of verifying fund transfers, account or address changes, or similar high-risk transactions through other communication channels, such as a known phone number.
  • Educate employees about emerging threats, including how to recognize potential AI-enabled attacks and respond to them.
  • Provide ongoing training to help employees identify and respond appropriately to BEC and phishing attempts. One survey found that, on average, 34.3% of untrained users fail phishing tests, but that number drops by nearly 50% within 90 days of phishing security training**.

Protect your online presence

  • Practice good personal and organizational password hygiene. Use strong passwords that are unique to each account and rotate them regularly. A password manager can help with this; a minimal sketch of generating such a password appears after this list.
  • Employ identity and access management (IAM) policies. Your organization’s policies should include multifactor authentication (MFA) for all users, single sign-on capability, and privileged access management.
  • Be cautious about sharing information online. Information from social media, business websites, or press releases could potentially be used against you, your employees, and your business.
  • Assess your website and social media to determine whether they share too much information. Be aware of any data you might have publicly available, especially unused employee profiles, email addresses, court records, or social media accounts.
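
As a concrete illustration of the password hygiene point above, the sketch below generates a strong random password with Python's standard-library secrets module; the 20-character length and character set are illustrative choices.

```python
# A minimal sketch of strong-password generation using Python's
# standard-library secrets module. Length and character set are
# illustrative choices; in practice, a password manager generates
# and stores these for you.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different high-entropy password on every call
```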

Leverage AI and machine learning for security

  • AI-driven tools can conduct continuous security audits and identify software, network, device, and operating system vulnerabilities before attackers can exploit them.
  • Consider investing in AI-based threat detection systems to identify behavior and traffic pattern anomalies and provide early warnings of potential threats.
  • Carefully assess any new outside vendors when considering new AI systems, products, or services. In addition to following third-party risk management practices, consider asking about data retention policies, AI model validation, and AI model maintenance††.

Practice cybersecurity hygiene

  • Developing a strong data recovery and backup plan can minimize the damage of a cyberattack or breach.
  • Never click suspicious links or open unknown attachments.
  • Regularly communicate to employees about common threats, such as phishing scams, and best practices for protecting against them.

Understand the threats and opportunities of generative AI

The duality of AI as both a cybersecurity asset and a threat actor’s weapon demands a proactive defense strategy. Emphasizing continuous learning and adaptation can help foster a strong security culture and build resiliency in the face of cybersecurity and fraud threats.

Huntington can support you with the insights, resources, and expertise needed to help you develop a strong cybersecurity and fraud prevention strategy. Explore our cybersecurity and fraud resources, then contact us to learn how we can help you protect your employees and your business.


Sources

† Hu, Krystal. 2023. “ChatGPT Sets Record for Fastest-Growing User Base – Analyst Note.” Reuters, February 2023. Accessed August 5, 2024.

‡ McKinsey & Company. 2024. “The State of AI in Early 2024: Gen AI Adoption Spikes and Starts to Generate Value.” Accessed August 5, 2024.

§ SlashNext. 2024. “The State of Phishing 2023.” Accessed August 5, 2024.

‖ Cloudflare. 2024. “The Phishing Implications of AI Chatbots.” Accessed August 5, 2024.

¶ FBI Internet Crime Complaint Center. 2024. “Internet Crime Report 2023.” Accessed August 5, 2024.

** KnowBe4. 2024. “Phishing by Industry Benchmarking Report 2024.” Accessed August 5, 2024.

†† U.S. Department of the Treasury. 2024. “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.” Accessed August 5, 2024.

The information provided in this document is intended solely for general informational purposes and is provided with the understanding that neither Huntington, its affiliates nor any other party is engaging in rendering tax, financial, legal, technical or other professional advice or services or endorsing any third-party product or service. Any use of this information should be done only in consultation with a qualified and licensed professional who can take into account all relevant factors and desired outcomes in the context of the facts surrounding your particular circumstances. The information in this document was developed with reasonable care and attention. However, it is possible that some of the information is incomplete, incorrect, or inapplicable to particular circumstances or conditions. NEITHER HUNTINGTON NOR ITS AFFILIATES SHALL BE LIABLE FOR ANY DAMAGES, LOSSES, COSTS OR EXPENSES (DIRECT, CONSEQUENTIAL, SPECIAL, INDIRECT OR OTHERWISE) RESULTING FROM USING, RELYING ON OR ACTING UPON INFORMATION IN THIS DOCUMENT OR THIRD-PARTY RESOURCES IDENTIFIED IN THIS DOCUMENT EVEN IF HUNTINGTON AND/OR ITS AFFILIATES HAVE BEEN ADVISED OF OR FORESEEN THE POSSIBILITY OF SUCH DAMAGES, LOSSES, COSTS OR EXPENSES.

Lending and leasing products and services, as well as certain other banking products and services, may require credit application approval.

Third-party product, service and business names are trademarks/service marks of their respective owners.