AI and Social Engineering

In a week where everything seems to be focused on AI and the adversary, we thought “why not take a look at how threat actors are leveraging changes in the technological landscape to trick us all?” – and we were pleasantly disturbed to find out that this process is becoming more common.

What are some common social engineering tactics? 

Firstly, there are five major ways in which AI is assisting with social engineering:

  1. Automatic phishing attacks
  2. Improved impersonation
  3. Automated targeting
  4. AI-assisted behavioral analysis
  5. Leveraged bots and chatbots

We won’t explore all of them here, but here is a quick overview of what to look out for:

  • AI tools can be used to generate highly realistic phishing emails or messages. Natural language processing (NLP) capabilities can help attackers create messages that closely mimic legitimate communication, making it more challenging for users to discern between genuine and malicious content.  
  • AI can be employed to mimic the voice or writing style of a trusted individual. Deepfake technology, for example, can create convincing audio or video recordings of someone, making it difficult for people to identify fraudulent communications. Sensitive information gleaned from earlier social engineering attacks can also be used to generate convincing “fake content” about a target. 
  • Machine learning algorithms can analyze large datasets to identify potential targets for social engineering attacks. This enables attackers to tailor their tactics to specific individuals or groups, making the attacks more personalized and convincing. This improved target selection leads to much more effective social engineering scams. 
  • AI can be used to analyze and predict human behavior, allowing attackers to better understand their targets. This information can then be exploited to craft more persuasive social engineering messages. This leads to the more effective application of social engineering tactics from malicious actors. 
  • AI-driven bots or chatbots can be employed to engage with individuals on social media or messaging platforms. These automated interactions can be used to gather information, build trust, and ultimately manipulate users into taking certain actions. 
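To see why AI-generated phishing is so much harder to catch, it helps to look at the kind of crude heuristic that legacy filters (and trained users) lean on: urgency keywords, sloppy spelling, suspicious raw-IP links. The sketch below is hypothetical and deliberately simplified – the keyword lists and scoring weights are illustrative assumptions, not any real product's rules – but it shows how a polished, NLP-generated message sails past checks that flag a clumsy one.

```python
import re

# Illustrative "tells" that old-school filters score on (assumed lists, not
# taken from any real filter). AI-written phishing avoids exactly these tells.
URGENCY = {"urgent", "immediately", "verify", "suspended", "password"}
COMMON_TYPOS = {"recieve", "acount", "securty", "informaton"}

def phishing_score(message: str) -> int:
    """Crude heuristic score: higher means more phishing-like."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    score = 0
    score += 2 * len(words & URGENCY)        # urgency language
    score += 3 * len(words & COMMON_TYPOS)   # sloppy spelling
    if re.search(r"http://\d{1,3}(\.\d{1,3}){3}", message):
        score += 5                           # link to a raw IP address
    return score

clumsy = "URGENT: your acount is suspended, verify password at http://192.0.2.7/login"
polished = "Hi Sam, per our call, please review the updated invoice before Friday."
print(phishing_score(clumsy), phishing_score(polished))  # the AI-style message scores 0
```

The point is not that this filter is good – it is that an attacker who lets a language model draft the message gets a clean, fluent email that scores exactly like legitimate correspondence.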

Where have we seen social engineering attacks in real life? 

Obviously, all this theoretical knowledge doesn’t help you a great deal. We need to know how threat actors apply social engineering tactics in real life. Otherwise, we might not recognize them until it is too late. You can have the best security tools in the game, but they don’t help much when you leave the front door of your security defenses open to the adversary! Here are some examples of social engineering in the world, but remember that the evolving landscape of cyber threats is always changing – be on your toes! 

Social engineers use deep fake technology 

In 2021, there were reports of a fraudster using deepfake technology to impersonate the voice of a company executive. The attacker called a subsidiary company and successfully convinced an employee to transfer a significant amount of money. The use of deepfake voice technology made the scam more convincing. 

Phishing messages with AI-empowerment 

Throughout 2021, there were multiple instances of AI-enhanced phishing attacks. These attacks used artificial intelligence and machine learning to analyze email patterns and craft phishing emails that closely mimicked legitimate communication. Some of these campaigns were highly targeted, focusing on specific individuals within organizations. As the widespread adoption of AI continues, we have already seen this problem develop – and it will no doubt keep developing. 

A phishing email on its own, however, is still a relatively simple risk with a low success rate for a security team to deal with – provided the team can identify it. Generative AI doesn’t just hand tools to the adversary; it hands them the keys to the kingdom.

Business Email Compromise (BEC) with AI 

In 2019, attackers used AI voice-mimicking deepfake technology to impersonate the chief executive of the German parent company of a UK-based energy firm. The attackers called the UK firm’s CEO and requested a fraudulent transfer of €220,000. The convincing nature of the voice deepfake contributed to the success of the attack, leading to a worrying undermining of the security we place in trusted individuals around us. 

Not only do companies have to deal with the threat of open warfare launched against them, but AI tools seem to provide the means for hackers to easily impersonate trusted businesses and individuals. And when that can happen, who is really to blame for a lack of trust on the part of the victims? 

Social Media Platforms, Bots, and Manipulation 

Instances of social media manipulation using automated bots have been reported regularly. While not always directly linked to social engineering attacks, the use of bots to spread misinformation, influence public opinion, and gather intelligence showcases the scale and potential impact of AI in manipulating online platforms. 
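One reason bot-driven manipulation is detectable at all is that automated accounts tend to behave with machine-like regularity. The sketch below is a hypothetical, minimal heuristic (the threshold and timestamps are assumptions for illustration): it flags an account whose posting intervals vary too little to look human.

```python
import statistics

def looks_automated(post_times: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag an account whose posting gaps are suspiciously regular.

    post_times: posting timestamps in seconds, sorted ascending.
    Uses the coefficient of variation (stdev / mean) of the gaps between
    posts; near-zero variation suggests a scheduler, not a person.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2:
        return False  # not enough activity to judge
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold

bot = [0, 300, 600, 900, 1200]     # posts exactly every 5 minutes
human = [0, 240, 900, 1000, 4200]  # irregular, bursty posting
print(looks_automated(bot), looks_automated(human))  # True False
```

Real platforms combine many such signals (content similarity, account age, network structure); a single timing check like this is trivial for a sophisticated operator to evade by jittering the schedule – which is exactly the cat-and-mouse dynamic AI accelerates.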

Threat detection is much more difficult on social media platforms, as the target is generally less protected. A successful attack in this space, however, can still lead to vast amounts of sensitive information being leaked from organizations. That’s why the risks of social engineering are worth noting for everyone within your company. 

Stay up to date with the latest threats

Our newsletter is packed with analysis of trending threats and attacks, practical tutorials, hands-on labs, and actionable content. No spam. No jibber jabber.