Generative AI is revolutionizing the way we create and interact with technology. From writing essays and code to generating music and artwork, generative AI systems like ChatGPT are leading the way in AI advancements. However, as the technology continues to grow and evolve, the potential for its misuse is becoming an increasingly serious concern.
Recent reports of deepfake technology being used to create pornographic content featuring the faces of women streamers have sparked outrage. And the controversial generative AI app, Lensa, faced backlash for allowing its system to create fully nude images of users and for altering the appearance of women of colour. These examples are just the tip of the iceberg, as generative AI can also be used in scams, cybercrime, and identity theft.
Cybercriminals are finding new ways to use generative AI to improve the frauds they perpetrate. The ability of these systems to find patterns in large amounts of data makes them a valuable tool for scammers. For example, generative AI can be used to impersonate important figures in voice spoofing attacks, create more believable scam messages, and even target vulnerable individuals more selectively.
Unfortunately, the laws and regulations surrounding generative AI are not yet equipped to handle its impact. While the US has had a National Artificial Intelligence Initiative in place since 2021, and the European Union is on its way to enacting the world’s first AI law, Australia and New Zealand have yet to take similar steps. This leaves these countries vulnerable to the potential dangers posed by generative AI.
To prevent the misuse of generative AI, it's important for governments to work closely with the cybersecurity industry to regulate the technology without stifling innovation. Ethical considerations for AI programs should be made mandatory, and Australia and New Zealand should take advantage of the upcoming Privacy Act review and the New Zealand Privacy, Human Rights and Ethics Framework to get ahead of potential threats.
As a society, it's also important to be cautious about what we see online and to remember that humans are often bad at detecting fraud. Spotting scams will become more difficult as criminals add generative AI tools to their arsenal, but understanding where these systems fall short can help us detect AI-based cybercrime. For example, generative AI is poor at critical reasoning and at conveying emotion, and it can be tricked into giving wrong answers.
In conclusion, the rapid growth of generative AI presents both exciting opportunities and serious risks. It’s crucial for governments, businesses, and individuals to take proactive steps to prevent its misuse, and to remain aware of the dangers posed by this powerful technology.