As we get further into 2023, the capabilities of Artificial Intelligence continue to develop at a rapid pace. The month of July continued this trend with the emergence of what some are calling ‘ChatGPT’s evil twin’, a cyberattack on Microsoft users, and a solid look at some of the emerging cybersecurity scams. All of these critical topics and more will be covered in our July Cyber Recap.
ChatGPT’s Evil Twin
Companies like Google and OpenAI are bound by basic yet necessary guidelines that prevent their Artificial Intelligence tools from being used to harm others. But what if there were software that wasn't bound by these restrictions? Fortunately for hackers, there's WormGPT. WormGPT is one of the first AI chatbots marketed specifically to cybercriminals, designed to help bad actors generate malware. WormGPT has also been used to create templates for phishing emails, which have been described as "remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing and BEC (business email compromise) attacks."
Some of the features included in WormGPT are unlimited character support, chat memory retention, and code formatting – all for around $75 a month on the dark web. The advent of WormGPT shows that hackers and cybercriminals not only use technology to attack your data, but actively improve upon it. To protect your sensitive information from these threats, SWK recommends enhanced email verification methods and security awareness training for employees, services which can be found here.
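One building block of enhanced email verification is checking the SPF, DKIM, and DMARC results that a receiving mail server records in the Authentication-Results header. The sketch below, using only Python's standard library, shows the idea; the sample message and domain names are hypothetical, and this is an illustration of the concept, not SWK's actual tooling.

```python
# Illustrative sketch: read SPF/DKIM/DMARC verdicts from an email's
# Authentication-Results header. Sample message and domains are hypothetical.
from email import message_from_string

def auth_results(raw_email: str) -> dict:
    """Return e.g. {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}
    parsed from the Authentication-Results header, or {} if absent."""
    msg = message_from_string(raw_email)
    header = msg.get("Authentication-Results", "")
    results = {}
    for part in header.split(";"):
        part = part.strip()
        for check in ("spf", "dkim", "dmarc"):
            if part.startswith(check + "="):
                # Keep only the verdict token, dropping trailing properties.
                results[check] = part.split("=", 1)[1].split()[0]
    return results

sample = """\
Authentication-Results: mx.example.com; spf=pass smtp.mailfrom=example.org; dkim=fail header.d=example.org; dmarc=fail header.from=example.org
From: ceo@example.org
Subject: Urgent wire transfer

Please send $40,000 immediately.
"""
print(auth_results(sample))  # {'spf': 'pass', 'dkim': 'fail', 'dmarc': 'fail'}
```

A failed DKIM or DMARC check on a message that claims to come from your own executives is exactly the kind of signal a BEC-aware mail filter should flag.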
Emerging Cyber Scams
The advancement of technology is a double-edged sword, bringing both innovation and risk to your online security. As the previous section shows, AI has been used for both purposes. Here's a list of other scams AI has supercharged in recent months.
Deepfakes
- Deepfakes take advantage of AI to copy a person's appearance and movements to an unsettlingly accurate degree. These programs need only a few source images before they can create a fictitious video of someone saying something entirely fabricated. Notably, the most common use of deepfakes online is explicit video, which makes up an estimated 96% of all instances. That does not mean the technology won't be put to other nefarious uses in the future, as some financial scammers have already been known to harness this technique.
AI Voice Cloning
- Just as a deepfake replicates someone's face and expressions, AI voice cloning specializes in replicating a person's manner of speaking. After processing a short set of voice samples, the AI can reproduce the target's voice almost perfectly, making the generated audio nearly identical to the original speaker. Scammers can use this technology to clone a family member's voice and portray them as being in danger, with a dire need for money. They can also impersonate a high-ranking member of a company to facilitate unauthorized transfers.
ChatGPT Phishing Emails
- Phishing is a classic technique that hackers have used for years to gain access to private information online. AI, however, is making it even easier to run a successful phishing campaign. In the past, telltale signs of a fake email were typos and inconsistencies in the story, but chatbots like ChatGPT can now smooth out any glaring errors in the content. AI can also help criminals pinpoint ideal victims, assessing which type of person would be most likely to fall for a given scam.
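Some of the indicators above can be turned into crude automated checks. The sketch below is a minimal, hypothetical example: the keyword list, weights, and sender comparison are illustrative assumptions, not a production filter, and AI-polished phishing often defeats exactly these surface checks, which is why training and verification still matter.

```python
# Minimal sketch of heuristic phishing scoring. Keywords and weights are
# illustrative assumptions only; real filters combine many more signals.
URGENCY = ("urgent", "immediately", "wire transfer", "gift card",
           "verify your account")

def phishing_score(sender: str, reply_to: str, body: str) -> int:
    score = 0
    # A Reply-To domain differing from the From domain is a classic BEC tell.
    dom = lambda addr: addr.rsplit("@", 1)[-1].lower()
    if reply_to and dom(reply_to) != dom(sender):
        score += 2
    # Urgency language remains common even in AI-polished messages.
    lowered = body.lower()
    score += sum(1 for kw in URGENCY if kw in lowered)
    return score

print(phishing_score("ceo@company.example", "ceo@mail-company.example",
                     "Urgent: wire transfer needed immediately."))  # 5
```

A higher score would simply route the message for human review rather than block it outright, since these heuristics generate false positives on legitimate urgent mail.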
Microsoft Makes Amends
After an alleged Chinese cyberattack on Microsoft users, the tech giant has decided to give some government and commercial customers free cybersecurity tools in response to criticism. Government officials have complained that Microsoft did not do enough to protect its users, a claim supported by the fact that one of the victims, a human rights organization, could not detect the security breach because it was not paying for premium software licenses. To investigate an attack, a company must have access to the logs that show when and where the attack took place, yet until recently Microsoft kept these logs behind a paywall. While granting free tools to users is a good first step, many critics believe it is too little, too late, as the damage has already been done.
Contact SWK Today
The capabilities of AI, and of the other tools being used to compromise your data, improve by the day. Staying on top of these developments is critical to protecting your online security. Contact SWK today to learn more about how you can prepare for the constant threat of hackers.