Threats posed by cybercriminals constantly evolve and grow in complexity as criminals gain access to more tools and resources for carrying out illegal activities. They have also adapted techniques such as social engineering, using psychological manipulation to trick individuals into divulging sensitive information or clicking on malicious links.
Individuals and organizations are forced to stay vigilant and take steps to protect their digital assets, but it is challenging when cybercriminals become increasingly sophisticated.
The numbers show just how active cybercriminals are. For example, a recent report from Norton, the cyber safety company, reveals that Norton blocked 3.5 billion threats in 2022, including 260 million file-based malware attacks and 90 million phishing attempts.
But the report digs a bit deeper. It found that scammers have been upping their game with new AI tools, specifically ChatGPT, a tool that creates stories, answers questions, and provides advice based on human prompts. Just as it serves people who use it ethically, it has also become a tool for cybercriminals.
Cybercriminals are exploiting ChatGPT's ability to produce human-like text that adapts to different languages and audiences. This lets them craft email and social media phishing lures that are far more convincing, making it increasingly difficult to tell what's legitimate and what's a threat.
But that's not all; ChatGPT can also generate code. Just as ChatGPT makes developers' lives easier with its ability to write and translate source code, it can also make cybercriminals' lives easier by making scams faster to create and more difficult to detect.
Norton experts also say that bad actors use ChatGPT to create deepfake chatbots, basically chatbots that impersonate humans or legitimate sources like a bank or government entity. The purpose is to manipulate victims into turning over their personal information.
"We're seeing that ChatGPT can be used to quickly and easily create convincing threats," said Kevin Roundy, Senior Technical Director of Norton. "It's getting harder than ever for people to spot scams on their own, which is why Cyber Safety solutions that look at more aspects of our digital lives – from our mobile devices to our online identity, and the wellbeing of those around us – are needed to keep us safe in our evolving world."
Norton advises avoiding chatbots that do not appear on a company's official website or app, being cautious about providing personal information to any chatbot, thinking before clicking on links sent in unsolicited phone calls, emails, or messages, and keeping security solutions updated to maintain a full set of security layers.
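The advice to think before clicking can be made concrete: the hostname in a link often gives away a phishing attempt. Below is a minimal illustrative sketch in Python (the domain names are hypothetical examples, not from the report) that extracts a link's host and checks it against an allowlist of domains the user actually trusts:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the user actually trusts.
TRUSTED_DOMAINS = {"example-bank.com", "gov.example"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the link's host is a trusted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A lookalike domain fails the check even though it resembles the real one.
print(is_trusted_link("https://login.example-bank.com/account"))       # True
print(is_trusted_link("https://example-bank.com.attacker.net/login"))  # False
```

Note that the check requires an exact match or a dot-separated subdomain, so a lookalike such as `evil-example-bank.com` would not pass; simple substring matching would be fooled by exactly the kind of convincing lures the report describes.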
In a time when cybercriminals go to any means to gain an edge over individuals and organizations, it is important to remain vigilant to protect digital assets.