
The Convergence of AI and Social Engineering

Introduction

Elon Musk’s recently released AI-generated deepfake video (1) on the social networking platform X, featuring world leaders attired in futuristic outfits, has drawn attention in cybersecurity circles to the dangers of Artificial Intelligence (AI). Though Musk’s video was released to poke fun at celebrities, the inherent power of AI to mislead, to misinform, and to cause immense devastation when misused is not to be missed.

Scammers have been harnessing that potential for close to a decade now, and celebrities have not been the only ones at the receiving end of their scams. With social engineering offering ample scope for malicious intentions, fun has been the last thing on the minds of these bad actors.

The perfect setting

To understand the convergence of AI and Social Engineering, it is worth revisiting what Social Engineering is and why it offers the perfect attack vector for scams. Social Engineering gets its name from what it does: tricking individuals into revealing sensitive information about themselves or their organizations, or into taking actions that compromise their security or assets. It has its origins in the science of human behavior, psychology, and motivation, and it works by manipulating victims’ emotions, such as greed, apprehension, fear, and curiosity, into making them act irrationally and illogically. Kaspersky (2) calls it an attack that manipulates a user’s behavior, preying on inherent weaknesses in his or her persona.

With its versatile features and ability to render convincing impersonations, AI offers limitless possibilities to weaponize social engineering attacks and make them more persuasive and effective.

How it is happening

Manipulation is at the heart of social engineering. Bad actors prey on the weaknesses of human nature and on users’ tendency to respond to cues they recognize, and AI offers just the tools to make those cues convincing. Here are some of the capabilities it puts in attackers’ hands:

  • Versatile situational analysis and planning features that help launch attacks
  • Voluminous data collation and churning to yield prime candidates for social engineering attacks
  • Fast-responding chatbots that simulate human behavior and offer language-perfect responses via Natural Language Processing (NLP), increasing credibility while concealing the scammer’s language limitations
  • Audio and video features for voice and image cloning, resulting in convincing and misleading deepfakes (3) delivered via cloned-voice calls and audio messages (vishing) and fake videos
  • Automation tools that facilitate industry-scale social engineering attacks via automated emails with malicious links
  • In-built learning tools that refine attacking strategies and suggest next-level social engineering paths

How effective it is

Statistics and studies show that AI-generated Social Engineering attacks are on the rise and improving by the day. Harvard Business Review (4) says that as much as 60% of phishing attacks, one of the primary social engineering methods, are now AI-automated. What is alarming is that the Large Language Models (LLMs) that power these attacks are not just making them more potent; they are simultaneously lowering the cost of generating them. Studies show that ChatGPT-generated emails used in phishing enjoy an extremely high rate of success. Health-ISAC, the global organization for healthcare stakeholders, sounds the alarm in its whitepaper (5) on the role of AI in the evolution of Social Engineering, terming it a tool of mass disinformation whose transformative power, amplified by the widespread consumption of social media, can seriously impact mass behavior, discredit individuals and institutions, sow discord, and alter public opinion.

As with other cyberattacks, social engineering scams result in financial losses, compromise of sensitive data, operational disruptions, system outages, and reputational damage. Cyber experts are convinced that the situation is only likely to worsen in the near future, making preventive action imperative.

Fighting the fire

Many experts highlight user awareness, social media prudence, cyber hygiene, and training as essential elements in the ongoing struggle to curb the negative impact of AI misuse. Training comes especially highly recommended: studies indicate that significant benefit accrues from ongoing social engineering training. Forbes (6) cites a study in which employee susceptibility to social engineering scams fell drastically, from 32.4% to 5% (a relative reduction of roughly 85%), after a year of ongoing training.

Training and employee awareness are likely to work well in tandem with other industry practices like phishing-resistant Multi-Factor Authentication (MFA) (7), the use of Behavioral Analytics (7), and the simulation of AI-driven Social Engineering attacks.
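
To make the Behavioral Analytics idea concrete, here is a deliberately minimal sketch in Python. It flags a login that falls far outside a user’s historical login hours; the history, threshold, and hour-of-day feature are invented for illustration and stand in for the much richer signals (geolocation, device, access patterns) that commercial tools use.

# Illustrative sketch only: a toy behavioral-analytics rule that flags a
# login occurring far outside a user's historical login hours.
# (Hour-of-day wraparound, e.g. 23 vs 0, is ignored for simplicity.)
from statistics import mean, stdev

def is_anomalous_login(history_hours: list[int], login_hour: int,
                       threshold: float = 2.0) -> bool:
    """Flag a login whose hour of day deviates more than `threshold`
    standard deviations from the user's historical average."""
    mu = mean(history_hours)
    sigma = stdev(history_hours)
    if sigma == 0:  # perfectly regular history; any deviation is notable
        return login_hour != mu
    return abs(login_hour - mu) / sigma > threshold

# Hypothetical history: a user who normally signs in during office hours
usual_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]
print(is_anomalous_login(usual_hours, 3))   # True: a 3 a.m. login is flagged
print(is_anomalous_login(usual_hours, 9))   # False: a typical hour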

Yet the best way forward appears to be the use of AI itself. Though it is very much a disruptive technology, AI also offers scope for analyzing, detecting, and responding to even the most advanced forms of social engineering. Its arsenal of content-evaluation tools can detect phishing attempts, see through fake reviews, and discern deepfakes.
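
As a simple illustration of that defensive use, the following Python sketch trains a toy phishing-text classifier with scikit-learn. Everything in it (the sample emails, labels, and model choice) is a hypothetical stand-in for the large, curated datasets and far more sophisticated models that production detectors rely on.

# Illustrative sketch only: a minimal phishing-text classifier built with
# scikit-learn. The training emails and labels below are invented
# placeholders; a real detector would be trained on large labeled corpora
# and would also weigh headers, URLs, and sender reputation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Urgent: your account is locked, verify your password now",
    "Final notice: confirm your banking details to avoid suspension",
    "Meeting moved to 3 pm, agenda attached",
    "Quarterly report draft is ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate (toy labels)

# TF-IDF turns each email into a weighted word-frequency vector;
# logistic regression then learns which terms signal phishing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

suspect = ["Action required: validate your credentials immediately"]
print(model.predict(suspect))        # e.g. [1] -> flagged as phishing
print(model.predict_proba(suspect))  # class probabilities for triage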

Final words

While the convergence of AI and Social Engineering continues to cause concern, the bigger picture of AI as a threat to cybersecurity has become a matter of open debate. The initial brouhaha about the ‘greatest invention of the century’ has begun to die down, and the industry is already calling for the use of Responsible AI (9). Discretion is now the watchword, and the knives look to be out. The Forbes article (10) citing the book ‘Power and Prediction: The Disruptive Economics of Artificial Intelligence’ by Ajay Agrawal, Joshua Gans, and Avi Goldfarb makes for interesting reading, as it calls for organizations to look at the technology critically when implementing it.

In the final analysis, however, statistics may provide the best indicator of how the fire of AI-generated Social Engineering threats might be quelled. Technology training and security awareness firm Infosec Institute (8) cites a study in which AI tools flagged as much as 61% of customer reviews of electronic goods on a leading e-commerce platform as fake and misleading.

A strange war is at hand. A war in which protagonist and antagonist are one and the same. One that sees AI being used to disarm AI!

References:

 


Contact us at sales@aurorait.com or call 888-282-0696 to learn more about how Aurora can help your organization with IT, consulting, compliance, assessments, managed services, or cybersecurity needs.
