New research is raising concerns about ChatGPT telling people how to get away with serious crimes. Norwegian research group Strise told CNN that it found workarounds to get the AI chatbot to offer tips on things like laundering money across borders and evading sanctions against Russia, including bans on weapons sales.
Further adding to worries, a report published by Wired in September revealed a way to “jailbreak” ChatGPT and get it to offer instructions on how to make a bomb.
Researchers warn that AI chatbots could help criminals break the law faster than ever by compiling massive amounts of information in seconds. Strise’s co-founder said the researchers got ChatGPT to offer advice on illegal activity by asking questions indirectly or by adopting a “persona.”
OpenAI, the company behind ChatGPT, responded to the findings by saying that it is always working to make the chatbot “better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity.”
OpenAI says it is aware of the power its technology holds, and that it closes loopholes through updates and requires users to agree to its terms of use. The company’s policy warns that accounts can be suspended or terminated if violations occur.