ChatGPT – Already a Security Risk? 

Artificial intelligence (AI) is an absolute game changer, according to the vast majority of our readers in a recent survey. Some thought leaders are now pleading for more restraint in the way we approach AI, while others warn against the possible negative side effects of stagnating research at this point. Some go as far as to say we should strap in for the bumpy ride, because we're on it whether we like it or not. But none of that changes the reality right now: secpros should be up to date on the security risks that come with ChatGPT's widespread accessibility.

With that in mind, we wanted to bring some attention to the possible security concerns that we are already seeing with ChatGPT. Some of these are concerns about widespread access to a tool that can automatically create malware – the potential "rise of the script kiddie", if you will. Others relate to the security of ChatGPT itself. Either way, these are all key problems with the magic tool on everyone's lips right now.

Is ChatGPT a security risk? 

For the most part, no. ChatGPT – which, for many people, just means the free chatbot built on GPT-3.5 – is itself only as dangerous as the person using it. There's no especially clear case right now for widespread bans on ChatGPT as a security risk. But it does create a few headaches for secpros, who are now having to manage integrating it with their systems.

ChatGPT and the problem of Large Language Models 

This criticism isn't specifically a worry about GPT-3, 3.5, or 4, but it is a concern with other, less scrupulous models and platforms as more and more people get in on the action. Some Large Language Models (LLMs) automatically save their inputs (i.e., all entered prompts) to their databases. Although requests for a short biography of Elvis Presley or the "I Have a Dream" speech as Bugs Bunny would have delivered it won't give away a great deal of confidential information, there is a concern that sensitive details could leak out in the long run.

As these models become more popular, there is a concern that "fine-tuning" and "prompt augmentation" data could be used in ways that violate privacy and – potentially – copyright laws. If all our requests are added to a hidden LLM database, personally or organizationally sensitive data could become readily available to a particularly determined threat actor group. In reaction to this concern, the NCSC has advised users not to enter sensitive information into public LLMs and to only ask these models questions that wouldn't cause issues if they were leaked.
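The practical upshot of the NCSC's advice is to treat anything typed into a public LLM as potentially public. Below is a minimal, hypothetical sketch of one way to act on that: a regex-based redactor that strips obvious secrets (email addresses, IPv4 addresses, and API-key-like tokens) from a prompt before it leaves your machine. The patterns and function names here are illustrative assumptions, not a substitute for a proper data loss prevention tool.

```python
import re

# Illustrative patterns only; a real deployment would rely on a dedicated DLP tool.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace anything that looks sensitive with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    risky = ("Summarise this log: jane.doe@corp.example failed login "
             "from 10.0.0.12 with token sk-abcdef1234567890abcd")
    print(redact_prompt(risky))
    # Summarise this log: [REDACTED EMAIL] failed login from [REDACTED IPV4] with token [REDACTED API_KEY]
```

Even a crude filter like this catches the most common accidental leaks; the harder problem is free-text context (names, project details, internal jargon) that no regex will reliably spot.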

ChatGPT and Phishing 

Predictably, a bot that generates written content has ended up being useful to people phishing for sensitive information. Although we only have proof-of-concept demonstrations at this point (or, at least, only proofs of concept have been confirmed), there is a clear use case for threat actors to mass-produce phishing email content with ChatGPT.

This easy access to high-quality phishing emails is deeply concerning for a cybersecurity culture that already struggles to drill home how important vigilant email-opening policies are.

ChatGPT and Malware 

Secpros are probably already aware that ChatGPT can be used to generate quick pieces of useful code. This can save valuable time over writing your own code (or searching Stack Overflow when that fails…), but it comes with a concerning problem in the wrong hands.
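For context, here is what that workflow looks like in practice: a minimal sketch that asks the model for a code snippet via the API. It assumes the pre-1.0 `openai` Python client and an API key in an environment variable; the model name and prompt are illustrative.

```python
# Minimal sketch: requesting a code snippet from ChatGPT via the API.
# Assumes the pre-1.0 openai client (pip install "openai<1.0").
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {
            "role": "user",
            "content": "Write a Python function that validates an IPv4 address.",
        },
    ],
)

# The generated code comes back as plain text in the first choice.
print(response["choices"][0]["message"]["content"])
```

The point is how low the barrier is: the same half-dozen lines that fetch a helpful utility function will, with a different prompt, fetch something far less benign.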

Just as legitimate code generation is incredibly easy, illegitimate code generation (for example, rebuilding known malware) is also possible. This means that an individual without any particular hacking skills doesn't even need to consult GitHub these days. Now, they can just type a prompt into a jailbroken version of ChatGPT and get everything they need within five minutes.

CyberArk even went as far as to show that ChatGPT can create polymorphic malware, specifically designed to confuse cybersecurity experts and evade detection. Although you might expect the sophisticated chatbot to have built-in defences, it seems the existing ones are easily bypassed if you know what you're doing.
