Why does ChatGPT pose new cybersecurity threats, and what can you do about them?
The ChatGPT model developed by OpenAI has garnered a lot of attention in recent months. Many of us have tried it and even integrated it into our daily lives. For students, it has become a go-to source for homework solutions; for copywriters and tutors, on the other hand, it has become a nightmare.
For some, it has replaced Google and Wikipedia, while others have yet to discover it. In any case, there is no doubt that this text model has opened new horizons for humanity and is revolutionizing the way we work and learn. In this article, Kamila Tukhvatullina, Senior Cybersecurity Analyst at Kaduu, shares her opinion on how this new technology might be used by hackers and how to protect yourself and your company.
Do new tools bring new threats?
Just as the internet has opened up unlimited possibilities for individuals, it has also become a breeding ground for cybercrime. Despite its many advantages, ChatGPT likewise poses security risks when it is used for malicious purposes.
We have tested ChatGPT for various tasks that threat actors might use it for. Here are some potential threats that it can bring, which we will discuss in detail:
For Individuals:
- The first, and quite obvious, threat is that ChatGPT, as a text model, can generate phishing emails that look shockingly legitimate and natural. You will not find the grammatical errors that used to give such emails away only a few years ago.
For example, in this screenshot we have asked ChatGPT to generate an email on behalf of Elon Musk asking for funds:
And while there is not much of “Elon Musk” in it, the email reads as natural and convincing.
- Cybercriminals may now write mobile apps and embed malicious code inside them faster than ever. ChatGPT has proven able to write a working (!) calculator app for Android in less than a minute, for example:
Unfortunately, it will not be difficult to integrate code that steals personal data into these apps.
- Finally, do you remember the “Tinder Swindler”? Well, some men are already using the new technology to generate pick-up lines or perfectly tailored messages based on basic information about their victims. The same goes for women, of course. While it may seem like harmless fun, it is also a way of impersonating someone else for scams and fraud.
For Companies:
- Again, cybercriminals can generate phishing emails using ChatGPT in a flash. Take a look at this example:
In one possible scenario, cybercriminals lead the victim to download malicious .rar, .iso, or .lnk files in order to compromise the entire network.
- Cybercriminals may also translate their phishing emails into any language, and the human eye will hardly detect any errors. Here is a ChatGPT-rewritten email in English, given the original:
And here is a translation to French:
While translation may not be a new tactic, it remains a tool that cybercriminals might use.
- AI models are good at generating website content, which can be used to clone pages or make phishing sites look more “authentic”.
- Finally, and worst of all, ChatGPT can be used to generate malware code.
Wait, what? Yes, indeed, ChatGPT is trained to be on the “good side”:
However, ChatGPT’s “knowledge” is limited to events before 2021. This means its safeguards will not recognize viruses written in 2022 or 2023 (who knows!) unless they reuse code fragments that were written earlier. Cybercriminals may therefore use ChatGPT to improve existing viruses or generate additional pieces of code.
- Cybercriminals might also modify existing viruses by obfuscating them with ChatGPT. The obfuscation works pretty well, as you can see in this example:
- They might also try “translating” known virus samples into other programming languages:
Admittedly, such translated code more often than not fails to run and throws errors here and there, but it can still give attackers a head start.
- The barrier to entry into this shady business has just been lowered by ChatGPT. Hackers no longer need extensive programming skills; they just need to know how to use ChatGPT and how to “bend” its results in their favor. Here is a short script written by ChatGPT that detects XSS and SQL injection attempts:
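The original screenshot is not reproduced here, but a minimal sketch of what such a detection script might look like follows. The pattern lists and function names are our own illustration, not ChatGPT’s exact output, and a real scanner would use far more robust, context-aware rules:

```python
import re

# Very naive signature-based checks, for illustration only.
XSS_PATTERNS = [
    r"<script\b",          # inline <script> tags
    r"on\w+\s*=",          # inline event handlers such as onclick=
    r"javascript:",        # javascript: URLs
]
SQLI_PATTERNS = [
    r"(?i)\bunion\b\s+select\b",   # UNION SELECT probes
    r"(?i)\bor\b\s+1\s*=\s*1",     # classic 'OR 1=1' tautology
    r"--|;",                       # comment / statement terminators
]

def looks_malicious(value: str) -> list[str]:
    """Return the attack classes the input appears to match."""
    findings = []
    if any(re.search(p, value) for p in XSS_PATTERNS):
        findings.append("possible XSS")
    if any(re.search(p, value) for p in SQLI_PATTERNS):
        findings.append("possible SQL injection")
    return findings

if __name__ == "__main__":
    for sample in ["<script>alert(1)</script>", "' OR 1=1 --", "hello world"]:
        print(sample, "->", looks_malicious(sample) or "clean")
```

Even this toy version shows the point: a few prompts produce something that runs, and that is exactly what lowers the bar for unskilled attackers.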
So, how do you protect yourself?
The first thing that comes to mind is to try and stop the spread of ChatGPT. However, this is not a realistic solution, as the genie is already out of the bottle. Instead, we need to adapt and find ways to protect ourselves and our companies.
For individuals:
- Be cautious when you receive emails, especially if they ask for personal information or funds. Do not click on links or download attachments from unknown sources. Use anti-phishing tools and educate yourself about phishing tactics. For example, make it a habit to check the sender’s domain. As a rule of thumb, read it from right to left; this way you see the actual registered domain first (see the short example after this list).
- Be careful about the apps you download, especially from third-party app stores or websites. Check the reviews and do some research before installing an app.
- Be aware of the information you share online, especially on dating sites or social media. Do not share sensitive personal information and be wary of anyone who asks for it.
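To illustrate the “read the domain from right to left” habit, here is a small Python sketch. The addresses and the two-label heuristic are our own simplified example; a robust check would use a public-suffix list to handle domains like .co.uk:

```python
def registered_domain(sender: str) -> str:
    """Return the last two labels of the sender's domain, e.g. 'paypal.com'.

    Simplified illustration only; real parsing needs a public-suffix list.
    """
    domain = sender.rsplit("@", 1)[-1].lower()
    labels = domain.split(".")
    return ".".join(labels[-2:])

for sender in ["billing@paypal.com", "billing@paypal.com.secure-login.ru"]:
    print(sender, "->", registered_domain(sender))
# billing@paypal.com                 -> paypal.com
# billing@paypal.com.secure-login.ru -> secure-login.ru  (not PayPal at all)
```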
For companies:
- As a company, you can also encourage your developers to use ChatGPT to review your code for vulnerabilities and to ask the AI about secure coding best practices (a sketch of such a request appears after this list).
- Use email filtering and anti-phishing tools to protect against malicious emails. Train your employees to recognize and report phishing attempts.
- Implement multi-factor authentication for your accounts to add an extra layer of security.
- Keep your systems and software up to date with the latest security patches.
- Use a firewall and antivirus to protect your network and devices.
- Consider subscribing to the Kaduu.io solution, which alerts you whenever your employee or partner data is leaked online. This way, you will always be one step ahead of the hackers.
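As an example of the developer-review idea above, here is a rough sketch of asking ChatGPT to audit a snippet through the OpenAI API. It assumes the openai Python package with an API key in the OPENAI_API_KEY environment variable; the exact client interface depends on your library version, and the vulnerable snippet is a made-up example:

```python
import openai  # reads OPENAI_API_KEY from the environment

# Deliberately vulnerable example: SQL built by string concatenation.
SNIPPET = '''
def get_user(conn, user_id):
    return conn.execute("SELECT * FROM users WHERE id = " + user_id)
'''

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a secure code reviewer."},
        {"role": "user", "content": "Review this code for security issues "
                                    "and suggest fixes:\n" + SNIPPET},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Used this way, the same model that helps attackers can also flag injection-prone code before it ships; just never paste secrets or proprietary code you are not allowed to share.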
Out of curiosity, we asked ChatGPT itself how it might be used by cybercriminals, and here is its response:
If you liked this article, we recommend reading our previous article about the LastPass security breach. Follow us on Twitter and LinkedIn for more content.
Stay up to date with information exposed online. Kaduu’s cyber threat intelligence service offers affordable insight into the darknet, social media, and the deep web.