ChatGPT tricked into giving advice on how to get away with crimes: Report
New research is raising concerns about ChatGPT telling people how to get away with serious crimes. Norwegian research group Strise told CNN that it found workarounds to get the AI chatbot to offer tips on crimes such as laundering money across borders and evading sanctions on Russia, including bans on weapons sales.
Further adding to worries, a report published by Wired in September revealed a way to “jailbreak” ChatGPT and get it to offer instructions on how to make a bomb.
Researchers warn that AI chatbots could help criminals break the law quicker than ever by compiling massive amounts of information in seconds. Strise’s co-founder said they got ChatGPT to offer illegal advice by asking questions indirectly or using a “persona.”
OpenAI, the company behind ChatGPT, responded to the findings by saying that it is always working to make the chatbot “better at stopping deliberate attempts to trick it, without losing its helpfulness or creativity.”
OpenAI maintains that it is aware of the power its technology holds, but asserts that it fixes loopholes with updates and requires users to agree to its terms of use before using its technology. The company’s policy warns that an account can be suspended or terminated if violations occur.
OpenAI: Cyber actors exploiting ChatGPT to influence elections
OpenAI identified and disrupted more than 20 attempts to use its artificial intelligence models to generate fake content aimed at influencing elections worldwide, the company revealed in a report published Wednesday, Oct. 9. The report highlights how cyber actors, including state-linked entities, have exploited OpenAI’s tools, such as ChatGPT, to create AI-generated articles, social media posts and comments intended to manipulate public opinion.
The 54-page report details efforts to spread misinformation in elections across the United States, Rwanda, India and the European Union.
In one case, an Iranian operation in August used OpenAI’s models to generate long-form articles and comments related to the U.S. election.
OpenAI also took action in July to ban ChatGPT accounts in Rwanda that were involved in posting election-related comments on social media platform X.
Despite these efforts, OpenAI stated that none of the operations were able to gain viral traction or build lasting audiences. The company said it acted quickly to neutralize the attempts, often resolving the issue within 24 hours of detection.
The report comes as concerns grow over the potential use of AI-generated content to interfere in upcoming elections.
OpenAI emphasized the need for greater awareness and vigilance as generative AI becomes more widely adopted. The report noted a significant increase in the creation of deepfakes and other AI-generated content.
There was a 900% rise in such material over the past year, according to data from Clarity, a machine learning firm.
AI tools consume up to 4 times more water than estimated
A new report shows that artificial intelligence tools, including ChatGPT, are using up to four times more water than previously believed. This discovery raises concerns about the sustainability of data centers as AI continues to expand.
Researchers from the University of California, Riverside found that processing 10 to 50 queries on AI chatbots can consume up to 2 liters of water, far exceeding the earlier estimate of half a liter. The increase is attributed to the intense cooling needs of data centers, where the servers generate significant heat.
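To put the revised figure in perspective, here is a rough back-of-the-envelope sketch, assuming the numbers above describe a single batch of 10 to 50 queries; actual per-query usage varies by model, data center and cooling method.

```python
# Back-of-the-envelope sketch using the figures cited above.
# Assumes a batch of 10-50 queries consumes about 0.5 L under the
# earlier estimate and about 2 L under the revised one.

OLD_LITERS_PER_BATCH = 0.5    # earlier estimate
NEW_LITERS_PER_BATCH = 2.0    # revised UC Riverside estimate
QUERIES_PER_BATCH = (10, 50)  # range of queries each estimate covers

for label, liters in [("earlier", OLD_LITERS_PER_BATCH), ("revised", NEW_LITERS_PER_BATCH)]:
    low = liters / QUERIES_PER_BATCH[1] * 1000   # mL per query if 50 queries per batch
    high = liters / QUERIES_PER_BATCH[0] * 1000  # mL per query if 10 queries per batch
    print(f"{label} estimate: {low:.0f}-{high:.0f} mL per query")

print(f"ratio: {NEW_LITERS_PER_BATCH / OLD_LITERS_PER_BATCH:.0f}x the earlier estimate")
```

Under these assumptions, a single chatbot query implies roughly 40 to 200 milliliters of water, four times the earlier range.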
According to Microsoft, the energy and water demands of AI models are much higher than anticipated. Between 2023 and 2024, Google, Microsoft and Meta reported water usage increases of 17%, 22.5% and 17%, respectively, further highlighting the growing environmental footprint of AI.
This is not just a U.S. issue. In the U.K., planned data centers are expected to consume as much water as a city the size of Liverpool. Meanwhile, in Ireland, data centers now account for 21% of the country’s electricity consumption.
OpenAI CEO Sam Altman recently presented a proposal to the White House to build at least five massive data centers, with plans for unprecedented energy expansions. However, critics argue that the energy production process for AI remains inefficient, with 60% of resources wasted.
While tech companies pledge to offset their water usage by 2030, critics warn that these efforts may not sufficiently address water scarcity in regions where AI data centers are located.
Newsom vetoes California AI safety bill: What it would have done
The proposed California law would have required safety testing of large AI systems. It would have also given the state’s attorney general power to sue companies over serious harm caused by their tech, and it would have required a sort of “kill switch” that would turn off AI models in case of emergency.
“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said in a statement explaining his opposition. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”
It’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses.
Patrick Hall, Assistant Professor of Decision Sciences, George Washington University
Under the EU’s AI Act, for instance, minimal-risk systems like OpenAI’s ChatGPT would only need to adhere to transparency provisions and EU copyright laws. But higher-risk systems, like AI models that try to predict whether a person might commit a crime, will be banned outright as of February 2025.
These algorithms are becoming a bigger and bigger part of our lives, and I do think it’s time to regulate them.
Patrick Hall, Assistant Professor of Decision Sciences, George Washington University
The following transcript has been edited for length and clarity. Watch the exchange in the video above.
Simone Del Rosario: Patrick, what was it in this bill that the governor of California sent back and how would it have changed the AI landscape in the state?
Patrick Hall: I think that there are a lot of good things on the table for this California bill, in particular, mandatory testing before systems were released; the ability for the government to take enforcement actions when harms do occur related to AI systems; the notion of a kill switch or the ability to turn a system off quickly; whistleblower protections. There were good things there.
I think that the issue was that the focus of the law was on so-called frontier models. And these are sort of the largest AI models developed by the largest AI companies. It’s a very narrow scope. And then also it really only focused on a sort of small aspect of the performance of AI systems that has come to be known, sort of confusingly, as AI safety.
AI safety really concentrates on things like preventing systems from being used to make bioweapons, preventing catastrophic risk, and I think that was where the bill went wrong.
AI can be a dangerous technology, but I think that it’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses. So I think that’s where the bill went wrong: its focus on catastrophic risk.
Simone Del Rosario: Do you agree with the tech companies that said this bill would have stifled innovation because of the things that you would have to do before developing or is that just an excuse that they make?
Patrick Hall: My opinion there is that it is an excuse, but it would certainly have cut into their revenues in terms of these AI systems, which are probably already under a great deal of stress. I try to explain to people that these generative AI systems require industrial-scale investments in computation, tens [to] hundreds of millions of dollars or more. So they’ve already spent a lot of money on these systems. Whenever you have a sort of regulatory burden, that, of course, increases the amount of money that you have to spend. But since we’re talking about the biggest, richest companies in the world, I do think it’s a little bit of an excuse.
Simone Del Rosario: I am curious: had this bill passed, or if California decides to move forward with different but similar legislation regulating AI when the rest of the country hasn’t, could this change how tech companies operate in the state of California?
Patrick Hall: Certainly you could see tech companies leave the state of California. I’m not sure how realistic that is, though. What tends to happen is almost a different scenario, where most of the larger firms take the obligations of a large state regulation – California, New York, Illinois, Texas – and apply them across the entire United States.
I’d say that’s actually a more likely outcome, and perhaps another reason some of the tech firms did not like this bill: they knew it would not only affect their behavior and revenues in California, but would likely affect their behavior and revenues throughout the country.
Simone Del Rosario: Let’s extrapolate that out even more because the EU has passed AI regulation, the AI Act, over there. These are multinational companies that have to adhere to rules in the EU. So how does that affect business in America? And how is the proposed regulation in California different from what we see in the EU?
Patrick Hall: One thing that I would like to emphasize is that EU citizens and citizens of other countries with strong data privacy laws or AI regulations really have a different experience online than Americans, and have many more protections from predatory behaviors by tech companies than we as Americans do.
What it boils down to is tech companies are able to extract a lot more data and sort of conduct a lot more experiments on Americans than they are able to on EU citizens and citizens of other countries in the world that have strong data privacy or AI regulations.
I think it’s a fully different online experience in Europe these days than it is in the U.S. The EU AI Act is a fairly different kind of law. It’s a much broader law and it’s a law that doesn’t focus only on so-called frontier models or only on large models. It doesn’t focus only on safety. It focuses on all types of uses of AI, and it has several different risk tiers, where models in different risk tiers or systems in different risk tiers have different compliance burdens. So it’s a much more holistic law.
Simone Del Rosario: Do we need to have an AI act of our own for a federal response to this?
Patrick Hall: It’s a very good question. I think the answer is yes, eventually. AI in 2024 is very data-driven, so it’s very hard to have good AI regulation without good data privacy regulation. The EU is quite far ahead of us in that they have a strong, overarching data privacy regulation, the GDPR, and after they passed that, they were able to pass an AI Act.
Now it doesn’t have to be done in that order. I’m not saying that the Europeans have done everything right. I’m not saying that they won’t stifle innovation. Certainly, they will to a certain degree, but we have a lot of catching up to do as well. We need to start thinking about data privacy and broader regulation of AI, certainly, and those two may have to be done together. It’s just hard to do AI regulation without data privacy regulation because 2024 AI is so data driven.
We as voters need to make it clear to our representatives that these types of regulations are important, and we need to make clear the harms we’re experiencing, anything from privacy violations to inconveniences to more serious negative outcomes.
These algorithms are becoming a bigger and bigger part of our lives, and I do think it’s time to regulate them. And I’d also make it clear that we have good models for regulating algorithms on the books in consumer finance, employment decision-making and medical devices, and any of these would be a better model to start from than the sort of, quote-unquote, AI safety direction.
OpenAI’s valuation surges to $157B after latest investment round
ChatGPT-maker OpenAI’s valuation soared to $157 billion on Wednesday, Oct. 2, after a fresh round of funding. This makes it the second most valuable private company in the world, behind TikTok parent company ByteDance and just ahead of SpaceX.
OpenAI just secured $6.6 billion in funding from investors. Some of the names involved include venture capital firms Thrive Capital and Khosla Ventures. It also received big-time corporate support from Microsoft and a new investment from Nvidia, whose chips are crucial to the processing power needed for artificial intelligence.
Abu Dhabi’s global investment fund MGX, along with investment firms Altimeter Capital, Fidelity and SoftBank, also took part in the latest round.
Since ChatGPT became available to the public, OpenAI has grown to 250 million weekly active users, and its valuation has skyrocketed from $14 billion in 2021 to $157 billion in 2024.
Reuters reported in September that the $157 billion valuation “hinges” on changing OpenAI’s corporate structure. This means the nonprofit would no longer be in control; instead, the company would be run as a for-profit, and CEO Sam Altman would get a stake in the company that could be worth $10.5 billion.
Apple was reportedly in talks to make an investment in the tech company, according to Reuters. However, it didn’t happen during this round.
OpenAI to grant authors access to training data in landmark copyright case
OpenAI will allow authors suing the company to inspect data used to train its artificial intelligence models in an ongoing copyright lawsuit. This marks the first time the AI firm has agreed to provide access to this information, potentially setting the stage for a pivotal legal battle over the use of copyrighted works in AI development.
The lawsuit, brought by authors including Sarah Silverman, Paul Tremblay and Ta-Nehisi Coates, alleges that OpenAI used their copyrighted works without permission to train its AI system, ChatGPT.
The authors claim their books were taken from online sources and used to generate summaries of their work.
As part of an agreement, OpenAI will allow the authors’ representatives to inspect the data at the company’s San Francisco office.
The review will take place under strict security measures, including a no-internet policy and the prohibition of recording devices. Reviewers must sign non-disclosure agreements and will have limited use of a computer for note-taking, under the supervision of OpenAI.
The lawsuit is one of several high-profile cases against AI companies accused of using copyrighted material to train machine learning models.
OpenAI has previously stated that its systems are trained using publicly available datasets, which may include copyrighted works. The company may argue that this practice falls under fair use, a legal doctrine that allows limited use of copyrighted material under certain conditions.
A U.S. court previously dismissed some of the authors’ claims, including allegations of unfair business practices and negligence. However, their claim of direct copyright infringement remains active.
The outcome of the case could set important legal precedents for the future of AI and the use of copyrighted material in training data.
Amazon faces challenges in AI race amid growing competition
Amazon is working to regain its position in the rapidly advancing artificial intelligence landscape as it faces increasing competition from tech giants like Microsoft, Google and OpenAI. While Amazon’s Alexa voice assistant once dominated the market, the release of OpenAI’s ChatGPT has reshaped the AI race, leaving the company working to catch up.
Amazon’s AI team has focused on developing large language models and generative AI technology to enhance Alexa and other products. However, the company’s efforts have encountered significant challenges.
Delays in development, shifting priorities, and tight timelines have slowed progress. As new AI models were integrated into Alexa, the assistant became less reliable for basic tasks, like turning on lights, which it once performed with over 90% accuracy.
Amazon remains optimistic, expecting AI to drive significant revenue growth in the coming years, particularly through its cloud services.
The company has also invested in partnerships and AI startups to bolster its capabilities. Despite these efforts, competitors like Google and Apple have introduced advanced AI assistants, intensifying the pressure on Amazon to innovate.
Amazon is set to reveal an upgraded version of Alexa as early as October, with new conversational abilities and enhanced functionality. However, questions remain about whether the company can close the gap with its rivals in the competitive AI landscape.
Most US teens using AI without parents’ knowledge: Survey
Teens have always been ahead of the curve when it comes to learning new technology, and artificial intelligence is no exception. Now, new research shows just how proficient teens are with AI, and how little parents and other adults really know about it.
In a recent survey of more than a thousand U.S. parents and their teens, the nonprofit Common Sense Media found 70% of the teens they surveyed had used some form of AI, but only a little more than a third (37%) of their parents knew about it. Nearly a quarter (23%) of parents believed their kids had not used AI at all.
While teens mess around with AI chatbots to stave off boredom or use search engines with AI-generated results, the survey found they most commonly use the technology for schoolwork: over half (53%) of teens were using AI for help with assignments.
Common Sense Media found a pretty even split between teens whose teachers gave them permission to use AI for help with assignments and those who didn’t get permission.
While AI can definitely come in handy, experts with Common Sense Media say parents should talk to their kids about the pitfalls associated with it — like its tendency to be more biased than humans or to share misleading or flat-out wrong information. They say it might also be difficult for teens to realize when using AI crosses the boundary into cheating.
New ChatGPT model designed to break down problems, ‘think’ before responding
Oprah Winfrey recently aired a special all about AI, titled “AI and the Future of Us.” After noting that “AI is still beyond our control,” Winfrey said humans will be able to adapt, adding, “The stakes could not be higher.”
The special featured guests including OpenAI CEO Sam Altman, tech influencer Marques Brownlee and FBI Director Christopher Wray. Winfrey said she was ready for what AI could do for humanity as OpenAI releases its new o1 model for ChatGPT.
Nicknamed “Strawberry,” the new model is designed to think before responding. Amid the hype, there is already debate over whether it lives up to that promise.
Living up to the hype may be important for Altman, as this model is four times more expensive than the previous model, GPT-4o. OpenAI admits that the older version may still be a better option for most everyday tasks.
What sets the new model apart is the ability to break down big problems into smaller steps and “think” through each one. This new feature can be powerful for tackling complex questions, but it’s not always the right fit for simpler tasks.
One user tested ChatGPT o1 by asking for help planning a Thanksgiving dinner for 11 people. The model paused to think, then gave a detailed 750-word response. It even suggested renting a portable oven, showing that it can give advice for a complex scenario.
However, when given a simple question, it delivered an 800-word essay. Given the same question, GPT-4o answered in just a few sentences.
That is where cost comes into play. The new ChatGPT model charges for what are called “reasoning tokens,” the extra steps it takes when thinking through a problem, which can quickly add up. This makes it less practical for quick, straightforward questions.
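As a rough illustration of how those reasoning tokens add up, here is a hedged sketch; the `query_cost` helper, the per-token prices and the token counts are hypothetical placeholders for illustration, not OpenAI’s published rates.

```python
# Hypothetical illustration of how hidden "reasoning tokens" inflate cost.
# Prices and token counts are made-up placeholders, not actual rates.

def query_cost(prompt_tokens, answer_tokens, reasoning_tokens,
               price_in_per_1k, price_out_per_1k):
    """Cost of one query in dollars, with reasoning tokens billed like output."""
    billed_output = answer_tokens + reasoning_tokens  # hidden steps are still billed
    return (prompt_tokens / 1000) * price_in_per_1k + (billed_output / 1000) * price_out_per_1k

# A short question that triggers a long hidden chain of reasoning
reasoning_model = query_cost(prompt_tokens=50, answer_tokens=200, reasoning_tokens=2000,
                             price_in_per_1k=0.015, price_out_per_1k=0.060)

# The same question on a cheaper model that uses no reasoning tokens
baseline_model = query_cost(prompt_tokens=50, answer_tokens=200, reasoning_tokens=0,
                            price_in_per_1k=0.005, price_out_per_1k=0.015)

print(f"reasoning model: ${reasoning_model:.3f} per query")  # ~$0.13
print(f"baseline model:  ${baseline_model:.3f} per query")   # ~$0.003
```

Under these assumed numbers, the hidden reasoning steps account for nearly all of the cost difference, even though the prompt and the visible answer are identical.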
Experts are intrigued by the model’s potential for solving complex issues through step-by-step reasoning, but even Altman is managing expectations. The new o1 model is now available, and Altman said the company plans to bring a miniature version to all ChatGPT users for free.
OpenAI said it plans to continue to develop new models in the GPT series.
Study: ChatGPT helps students finish homework but may hinder learning
AI has become more common in education, and a new study asks whether it actually helps students learn. The study’s authors conducted an experiment with high school students, and the findings suggest the answer is not simple.
Researchers at the University of Pennsylvania had Turkish high school students use ChatGPT on math practice problems. Students who had access to the tool during practice solved 48% more problems correctly, but scored 17% lower on the subsequent test compared to those without it.
A third group used a modified ChatGPT that acts as a tutor, offering hints instead of direct answers. Those students did much better on practice problems, solving 127% more of them, but they still did not perform any better on the test than students without access to the tool.
The study suggests students rely too much on AI as a crutch, asking for answers instead of solving problems themselves. ChatGPT’s inaccuracies didn’t help either, as the tool was wrong about 42% of the time in its problem-solving approach.