News

Cybersecurity researchers found it's easier than you'd think to get around the safety features preventing ChatGPT and other LLM chatbots from writing malware — you just have to play a game of ...
Hackers have infiltrated a tool your software development teams may be using to write code. Not a comfortable place to be. There’s only one problem. How did your generative AI chatbot team members ...
The release of two malicious language models, WormGPT and FraudGPT, demonstrates attackers' evolving capability to harness language models for criminal activities. Bad actors, unconfined by ethical ...
Generative AI presents many opportunities for businesses to improve operations and reduce costs. On the bright side, this form of AI has great potential to deliver value to organizations.
A leading security researcher has suggested Microsoft’s core Windows and application development programming teams have been infiltrated by covert programmer/operatives from U.S. intelligence agencies ...
Cybersecurity researchers were able to bypass security features on ChatGPT by roleplaying with it. By getting the LLM to pretend it was a coding superhero, they got it to write password-stealing ...