23-8-2023 (CALIFORNIA) Netenrich, a cybersecurity research firm, has uncovered a concerning development in the world of artificial intelligence (AI). A new AI tool called “FraudGPT” has emerged, designed specifically for malicious activities such as spear phishing, developing cracking tools, and carding. This tool has found its way onto the dark web and is available for purchase on various marketplaces and the popular messaging app Telegram.
Similar to ChatGPT, FraudGPT possesses the ability to generate content, but with a sinister twist. Netenrich’s threat research team first encountered FraudGPT being advertised in July 2023. A key selling point of this tool is its ability to bypass the safeguards and restrictions that prevent ChatGPT from responding to questionable queries.
According to the information provided, FraudGPT undergoes regular updates and utilizes multiple types of artificial intelligence. Users can obtain FraudGPT through a subscription model, with monthly subscriptions priced at $200 and annual memberships at $1,700.
Netenrich’s team decided to investigate FraudGPT by purchasing and testing the tool. Its interface closely resembles that of ChatGPT, with a left sidebar displaying the user’s request history and a chat window occupying the main screen. Users simply input their queries and hit “Enter” to receive a response.
During their tests, the researchers used FraudGPT to craft a phishing email related to a bank. Minimal input was required: the query needed only the bank's name. FraudGPT not only completed the task but also suggested where a malicious link could be inserted in the text. It also demonstrated the capability to create scam landing pages that deceive visitors into divulging personal information.
Furthermore, FraudGPT was prompted to identify frequently visited or exploited online resources, which could be valuable for hackers planning future attacks. An online advertisement for the tool even claimed that it could generate undetectable malware and search for vulnerabilities and targets.
The investigation by Netenrich also revealed that the individual behind FraudGPT had previously advertised hacking services for hire. Additionally, they were associated with another program called WormGPT, which shares similarities with FraudGPT.
The emergence of FraudGPT highlights the importance of vigilance in the face of evolving cyber threats. While it remains unclear whether hackers have already used these technologies to create new dangers, tools like FraudGPT allow them to produce phishing emails and landing pages in seconds rather than hours.
It is crucial for consumers to remain cautious and skeptical of requests for personal information, while adhering to cybersecurity best practices. Cybersecurity professionals should ensure their threat-detection tools are up to date, as malicious actors may employ programs like FraudGPT to directly target and infiltrate critical computer networks.
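One of the best practices mentioned above, scrutinizing links before trusting them, can be automated in part. The following is a minimal, hypothetical sketch of a link-screening heuristic; the `TRUSTED_DOMAINS` allowlist and the `suspicious_links` function are illustrative assumptions, not part of any real product, and real threat-detection tools use far more sophisticated signals.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist -- in practice this would come from an
# organization's own inventory of trusted domains (assumption for
# illustration only).
TRUSTED_DOMAINS = {"example-bank.com"}

# Rough pattern to pull http/https URLs out of free-form email text.
URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def suspicious_links(email_text: str) -> list[str]:
    """Return links that fail two simple phishing heuristics:
    the host is a raw IP address, or it is outside the allowlist."""
    flagged = []
    for url in URL_RE.findall(email_text):
        host = urlparse(url).hostname or ""
        # Raw IP hosts are a classic phishing tell.
        is_ip = re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host) is not None
        # Accept the trusted domain itself or any of its subdomains.
        trusted = any(host == d or host.endswith("." + d)
                      for d in TRUSTED_DOMAINS)
        if is_ip or not trusted:
            flagged.append(url)
    return flagged

# A link to the allowlisted bank passes; an IP-based link is flagged.
print(suspicious_links("Verify at https://example-bank.com/help"))
print(suspicious_links("Urgent: log in at http://192.168.0.1/login now"))
```

Heuristics like this catch only the crudest lures; AI-generated phishing of the kind FraudGPT produces is precisely designed to look legitimate, which is why layered defenses and user skepticism remain essential.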
The analysis of FraudGPT serves as a stark reminder that hackers continually adapt their methods. It is also a reminder that openly available software can carry security risks. Both internet users and those responsible for securing online infrastructure must stay informed about emerging technologies and their associated risks, even when using seemingly benign programs like ChatGPT.