From VALL-E to DeepFakes: A New Dimension in Cybercrime

Are these services here to help, or to steal your identity? 🤯

In today’s digital age, risks are everywhere. With new technologies emerging every day, it’s easy to forget that even those designed to help us can also bring harm if used for malicious purposes.

We’ve previously discussed the potential security risks of ChatGPT and other language models. Now we’re going to take a closer look at two other kinds of models: the now well-known video- and voice-generating GANs, better known by the name of their end product, DeepFakes, and Microsoft’s newly released text-to-speech model, VALL-E.

As Artificial Intelligence (AI) generated images and speech become more realistic and sophisticated, it’s important to understand the potential dangers they pose. DALL-E, Midjourney, DeepFake-producing GANs, and VALL-E can all be used to impersonate individuals or organisations and spread misinformation, with the ultimate goal of gaining access to sensitive information or funds.

In this article, we will explore the potential security risks of content-generating AI models, and discuss ways to protect yourself and your organisation from these risks.

DALL-E-generated image for this article

Understanding the Risks of VALL-E

VALL-E is a text-to-speech model that can synthesize personalized speech while maintaining the emotional tone of the speaker prompt. This technology has the potential to change the way we interact with virtual assistants, chatbots, and other AI-powered systems. However, along with its potential benefits, it also poses certain cyber security risks that both individuals and companies should be aware of.
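To illustrate how accessible this kind of technology already is: VALL-E itself has not been publicly released, so the minimal sketch below uses the open-source Coqui TTS library’s XTTS model instead, which offers a comparable zero-shot voice-cloning workflow. This is an illustrative stand-in, not VALL-E’s actual API, and the file names are hypothetical.

```python
# A minimal voice-cloning sketch using the open-source Coqui TTS library
# (pip install TTS). This is a stand-in, not VALL-E's API, but it shows the
# same zero-shot workflow: a few seconds of reference audio are enough to
# synthesize arbitrary speech in that voice.
from TTS.api import TTS

# Load a multilingual zero-shot voice-cloning model.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# "target_voice.wav" is a hypothetical short recording of the voice to clone.
tts.tts_to_file(
    text="Please send me the access credentials today.",
    speaker_wav="target_voice.wav",
    language="en",
    file_path="cloned_message.wav",
)
```

The point is not this particular library but the low barrier to entry: everything above is freely available and requires no machine-learning expertise.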

One of the main risks associated with VALL-E is impersonation. Because the model can mimic an individual’s speech patterns from a short speaker prompt, it could be used to create convincing phishing or social engineering attacks, making malicious activity more difficult to detect and prevent.

For example, an attacker could use VALL-E to impersonate the CEO of a company and urgently request sensitive information from an employee. Under the combined pressure of urgency and the CEO’s authority, we tend to trust such content more.

In another scenario, the model could be used to generate voice messages from family members claiming that they have been detained and that a ransom needs to be paid. This can be a particularly traumatic experience for people who are far from their families and unable to verify their safety.

The Old New DeepFakes

Another potential nightmare scenario is the use of VALL-E in deepfake technology. Deepfakes are videos that use AI to superimpose one person’s face or speech onto another person’s body or audio. This technology is already being used to create convincing fake videos of politicians, celebrities, and other public figures. Combined with VALL-E’s ability to mimic the speech patterns and emotional tone of an individual, deepfakes could be used to launch phishing or social engineering attacks that are far more difficult for individuals and companies to detect and prevent.

One high-profile example of this is the use of deepfakes in cryptocurrency scams, where a deepfake video of a well-known figure, such as Elon Musk, is used to promote a fake investment opportunity:

In the video above, we can clearly tell that Musk’s voice is generated, thanks to the poor sound quality as well as the odd phrasing and intonation. With VALL-E, the result will be much more realistic. This type of fraud is likely to become more prevalent as AI-generated images and speech become harder to detect, and thus far more convincing.

Protecting Yourself and Your Organisation

To protect yourself from these risks, it’s important to take a few simple steps.

  • First, be skeptical of any image, video or speech that seems too good to be true, as well as too bad, too urgent or too scary.
  • Always fact-check and verify the source of any image or speech before sharing it or taking any action based on it. Better safe than sorry.
  • Use tools like reverse image search and metadata analysis to verify the authenticity of an image or video. These tools can help detect whether an image or video has been manipulated or is a deepfake; see the metadata sketch after this list.
  • Mind who you share your personal images, voice memos and videos with. The very same files may well be used against you or your company.
  • Use two-factor authentication, password managers, and the Kaduu.io solution to protect your accounts and personal information; a sketch of how one-time codes work follows this list. This will help prevent unauthorised access to your accounts, which could otherwise be used to steal sensitive information or launch phishing and social engineering attacks.
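As mentioned in the list above, metadata analysis is a cheap first check. The sketch below uses the Pillow library to dump an image’s EXIF tags; missing or inconsistent metadata is not proof of manipulation, but it is a useful signal. The file name is hypothetical.

```python
# A minimal EXIF-inspection sketch using the Pillow library (pip install Pillow).
# AI-generated or re-encoded images often carry no camera metadata at all,
# which is a useful (though not conclusive) warning sign.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> dict:
    """Return an image's EXIF tags as a {tag name: value} dictionary."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = inspect_exif("suspicious_photo.jpg")  # hypothetical file name
if not tags:
    print("No EXIF metadata found - the image may have been generated or scrubbed.")
else:
    for name, value in tags.items():
        print(f"{name}: {value}")
```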
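And to show why two-factor authentication raises the bar even when a voice or password is convincingly faked, here is a minimal sketch of how time-based one-time passwords (TOTP) work, using the pyotp library. The secret is generated on the spot, purely for illustration.

```python
# A minimal TOTP sketch using the pyotp library (pip install pyotp).
# The one-time code changes every 30 seconds and is derived from a shared
# secret, so a stolen password alone is not enough to log in.
import pyotp

secret = pyotp.random_base32()   # in practice, stored securely per user
totp = pyotp.TOTP(secret)

code = totp.now()                # the 6-digit code an authenticator app shows
print(f"Current code: {code}")
print("Valid?", totp.verify(code))  # True within the 30-second window
```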

Here at Kaduu, we believe that AI models are powerful technologies that are revolutionising the way we work, but they also pose certain cyber security risks. By taking steps to protect yourself and your organisation, you can help mitigate those risks while still enjoying the benefits of new technologies.

If you liked this article, we advise you to read our previous article about ChatGPT security risks. Follow us on Twitter and LinkedIn for more content.

Stay up to date with exposed information online. Kaduu, with its cyber threat intelligence service, offers affordable insight into the darknet, social media and the deep web.
