
New Study Reveals the Role Large Language Models Play in Phishing Attacks

While phishing (in its various forms) has taken place for decades, this type of fraud evolves with technology. One of the most prominent phishing scams today involves “vishing,” in which supposed links to voicemail messages trick victims into revealing their credentials for secure email gateways, software or websites.

According to a new report by cybersecurity company Egress, missed voice messages account for 18% of phishing attacks today, making them the most phished topic of the year so far. The report’s findings demonstrate the evolving attack methodologies cybercriminals use to get through traditional perimeter security, including secure email gateways. The study, titled “Phishing Threat Trends Report,” delves into key phishing trends (including the most phished topic), explores prevalent obfuscation techniques being used to bypass perimeter defenses, and examines whether chatbots have really revolutionized cyberattacks.

All phishing threat data and examples contained within the report were taken from Egress Defend, an Integrated Cloud Email Security solution that uses intelligent technology to detect and defend against the most sophisticated phishing attacks.

The report also highlights the role that large language models (LLMs) have played in enabling certain types of phishing attacks.

“Without a doubt, chatbots or large language models (LLMs) lower the barrier for entry to cybercrime, making it possible to create well-written phishing campaigns and generate malware that less capable coders could not produce alone,” said Jack Chapman, VP of Threat Intelligence for Egress.

One of the most concerning (but least-talked-about) applications of LLMs is reconnaissance for highly targeted attacks, according to Egress. Within seconds, a chatbot can scrape the internet for open-source information about a chosen target that can be leveraged as a pretext for social engineering campaigns, which are growing increasingly common.

“I’m often asked if LLMs really change the game, but ultimately it comes down to the defense you have in place,” noted Chapman. “If you’re relying on traditional perimeter detection that uses signature-based and reputation-based detection, then you urgently need to evaluate integrated cloud email security solutions that don’t rely on definition libraries and domain checks to determine whether an email is legitimate or not.”
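To illustrate the gap Chapman describes, the following is a minimal, hypothetical Python sketch of signature-based and reputation-based filtering; the sample hashes, domains, message text and function names are invented for illustration and do not reflect Egress Defend’s actual detection logic.

```python
# Hypothetical sketch of traditional perimeter checks: a "definition library"
# of known phishing bodies plus a sender-domain reputation list. A novel,
# well-written message from a freshly registered domain passes both checks.
import hashlib

# "Definition library": hashes of previously seen phishing message bodies.
KNOWN_PHISH_HASHES = {
    hashlib.sha256(b"Click here to claim your prize!").hexdigest(),
}

# "Domain checks": sender domains with a known bad reputation.
BLOCKLISTED_DOMAINS = {"evil-mail.example", "spoofed-bank.example"}


def signature_check(body: str) -> bool:
    """Flag the email only if its body exactly matches a known sample."""
    return hashlib.sha256(body.encode()).hexdigest() in KNOWN_PHISH_HASHES


def reputation_check(sender: str) -> bool:
    """Flag the email only if the sender's domain is already blocklisted."""
    return sender.split("@")[-1] in BLOCKLISTED_DOMAINS


def perimeter_filter(sender: str, body: str) -> str:
    """Return 'blocked' if either legacy check fires, else 'delivered'."""
    if signature_check(body) or reputation_check(sender):
        return "blocked"
    return "delivered"


# A never-before-seen voicemail lure from a newly registered domain:
# neither the signature nor the reputation check fires.
print(perimeter_filter(
    "voicemail@newly-registered.example",
    "You have one new voice message. Sign in to your gateway to listen.",
))  # -> "delivered"
```

Because LLM-generated lures are fluent and freshly worded, and attackers rotate to new domains, both legacy checks in this sketch stay silent, which is the weakness content-aware detection aims to close.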




Edited by Alex Passett
