Trump to overhaul Biden’s proposed AI policy, promote private sector
The future of artificial intelligence policy in the United States is at a crossroads. President Joe Biden has prioritized safety, transparency and regulation, while President-elect Donald Trump plans to take a more deregulated, innovation-driven approach.
Biden’s executive order on AI safety, announced earlier this year, aims to ensure rigorous oversight of AI systems. The plan includes safety testing for critical sectors such as energy and transportation, strict privacy protections and mandates for transparency to prevent biases and discrimination.
The administration argues that these safeguards are essential to protecting the public and minimizing risks from the rapidly advancing technology.
Trump, however, announced plans to repeal Biden’s executive order and implement a contrasting strategy that prioritizes economic growth and innovation. His administration intends to designate AI hubs across the country, offering tax breaks and incentives to tech companies, and plans to scale up nuclear energy and cloud infrastructure to power advanced AI systems.
He also intends to reduce regulations, arguing that Biden’s framework stifles American companies in the global AI race. Critics of Trump’s plan warn that deregulation could lead to dangerous flaws in AI systems, exacerbate bias or leave the technology vulnerable to misuse.
Proponents argue that fewer restrictions will enable U.S. companies to compete more effectively against China, which has made massive investments in AI technology.
With Trump’s policy details expected in late 2025, the debate over regulation versus innovation is already drawing sharp divides in Washington, D.C. Both approaches have significant implications for the future of AI development, the economy and national security.
The proposed California law would have required safety testing of large AI systems. It would have also given the state’s attorney general power to sue companies over serious harm caused by their tech, and it would have required a sort of “kill switch” that would turn off AI models in case of emergency.
“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said in a statement explaining his opposition. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”
It’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses.
Patrick Hall, Assistant Professor of Decision Sciences, George Washington University
For instance, minimal-risk systems like OpenAI’s ChatGPT would only need to adhere to transparency provisions and EU copyright laws. But higher-risk systems, like AI models that try to predict whether a person might commit a crime, will be fully banned as of February 2025.
These algorithms are becoming a bigger and bigger part of our lives, and I do think it’s time to regulate them.
Patrick Hall, Assistant Professor of Decision Sciences, George Washington University
The following transcript has been edited for length and clarity. Watch the exchange in the video above.
Simone Del Rosario: Patrick, what was it in this bill that the governor of California sent back and how would it have changed the AI landscape in the state?
Patrick Hall: I think that there are a lot of good things on the table for this California bill, in particular, mandatory testing before systems were released; the ability for the government to take enforcement actions when harms do occur related to AI systems; the notion of a kill switch or the ability to turn a system off quickly; whistleblower protections. There were good things there.
I think that the issue was that the focus of the law was on so-called frontier models. And these are sort of the largest AI models developed by the largest AI companies. It’s a very narrow scope. And then also it really only focused on a sort of small aspect of the performance of AI systems that has come to be known, sort of confusingly, as AI safety.
AI safety really concentrates on things like preventing systems from being used to make bioweapons, preventing catastrophic risk, and I think that was where the bill went wrong.
AI can be a dangerous technology, but I think that it’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses. So I think that’s where the bill went wrong: its focus on catastrophic risk.
Simone Del Rosario: Do you agree with the tech companies that said this bill would have stifled innovation because of the things that you would have to do before developing or is that just an excuse that they make?
Patrick Hall: My opinion there is that it is an excuse, but it would certainly have cut into their revenues in terms of these AI systems, which are probably already under a great deal of stress. I try to explain to people that these generative AI systems require industrial-scale investments in computation, tens [to] hundreds of millions of dollars or more. So they’ve already spent a lot of money on these systems. Whenever you have a sort of regulatory burden, that, of course, increases the amount of money that you have to spend. But since we’re talking about the biggest, richest companies in the world, I do think it’s a little bit of an excuse.
Simone Del Rosario: I am curious: had this bill passed, or if California decides to move forward with different but similar legislation regulating AI when the rest of the country hasn’t, could this change how tech companies operate in the state of California?
Patrick Hall: Certainly you could see tech companies leave the state of California. I’m not sure how realistic that is, though. What tends to happen is almost a different scenario, where most of the larger firms take the California regulation – or any large state’s regulation, whether California, New York, Illinois or Texas – and apply the obligations to meet that regulation across the entire United States.
I’d say that’s actually a more likely outcome and perhaps another reason why some of the tech firms did not like this bill is because they knew it would not only affect their behavior and their revenues in California, but it was likely to affect their behavior and revenues throughout the country.
Simone Del Rosario: Let’s extrapolate that out even more because the EU has passed AI regulation, the AI Act, over there. These are multinational companies that have to adhere to rules in the EU. So how does that affect business in America? And how is the proposed regulation in California different from what we see in the EU?
Patrick Hall: One thing that I would like to emphasize is that EU citizens and citizens of other countries with strong data privacy laws or AI regulations really have a different experience online than Americans and have many more protections from predatory behaviors by tech companies than we as Americans do.
What it boils down to is tech companies are able to extract a lot more data and sort of conduct a lot more experiments on Americans than they are able to on EU citizens and citizens of other countries in the world that have strong data privacy or AI regulations.
I think it’s a fully different online experience in Europe these days than it is in the U.S. The EU AI Act is a fairly different kind of law. It’s a much broader law and it’s a law that doesn’t focus only on so-called frontier models or only on large models. It doesn’t focus only on safety. It focuses on all types of uses of AI, and it has several different risk tiers, where models in different risk tiers or systems in different risk tiers have different compliance burdens. So it’s a much more holistic law.
Simone Del Rosario: Do we need to have an AI act of our own for a federal response to this?
Patrick Hall: It’s a very good question. I think the answer is yes, eventually. AI in 2024 is very data-driven, so it’s very hard to have good AI regulation without good data privacy regulation. The EU is quite far ahead of us in that they have a strong, overarching data privacy regulation, the GDPR, and after they passed that, they were able to pass an AI Act.
Now it doesn’t have to be done in that order. I’m not saying that the Europeans have done everything right. I’m not saying that they won’t stifle innovation. Certainly, they will to a certain degree, but we have a lot of catching up to do as well. We need to start thinking about data privacy and broader regulation of AI, certainly, and those two may have to be done together. It’s just hard to do AI regulation without data privacy regulation because 2024 AI is so data-driven.
We as voters need to make it clear to our representatives that these types of regulations are important, and we need to make clear the harms we’re experiencing – anything from privacy violations and inconveniences to more serious negative outcomes.
These algorithms are becoming a bigger and bigger part of our lives and I do think it’s time to regulate them. And I’d also make it clear that we have good models for regulating algorithms on the books in consumer finance, in employment decision-making and in medical devices, and any of these would be a better model to start from than the sort of, quote-unquote, AI safety direction.
OpenAI’s valuation surges to $157B after latest investment round
ChatGPT-maker OpenAI’s valuation soared to $157 billion on Wednesday, Oct. 2, after a fresh round of funding. This makes it the second most valuable private company in the world, behind TikTok-parent company ByteDance and just ahead of SpaceX.
OpenAI just secured $6.6 billion in funding from investors. Some of the names involved include venture capital investors such as Thrive Capital and Khosla Ventures. It also received big-time corporate support from Microsoft and a new investment from Nvidia, whose chips are crucial to the processing power needed for artificial intelligence.
Abu Dhabi’s global investment fund MGX along with investment firms Altimeter Capital, Fidelity and SoftBank also took part in the latest round.
Since ChatGPT became available to the public, OpenAI has grown to 250 million weekly active users. And its valuation has skyrocketed from $14 billion in 2021 to $157 billion in 2024.
Reuters reported in September that the $157 billion valuation “hinges” on changing OpenAI’s corporate structure. This means the nonprofit would no longer be in control; instead, the company would be run with a for-profit mindset, and Altman would get a stake in the company that could be worth $10.5 billion.
Apple was reportedly in talks to make an investment in the tech company, according to Reuters. However, it didn’t happen during this round.
Newsom vetoes controversial California AI safety bill
Gov. Gavin Newsom vetoed a landmark California bill that would have established the nation’s first safety regulations for artificial intelligence. The proposed legislation sought to require AI companies to test their systems, publicly disclose safety measures and provide whistleblower protections.
Proponents of the bill argued that it was a necessary step to address the risks AI poses to infrastructure and public safety, with potential threats ranging from manipulating electric grids to creating chemical weapons.
However, Newsom opposed the bill, stating its strict requirements could hinder innovation by imposing regulations even on low-risk AI systems.
California is home to 32 of the world’s top 50 AI companies.
Newsom expressed concern that the legislation could drive developers out of the state. Instead, he announced a partnership with AI industry leaders to create more flexible safety guidelines.
While the veto is seen as a win for tech companies, experts warn that it leaves rapidly advancing AI systems unregulated.
Supporters of the bill, including Elon Musk’s X and AI firm Anthropic, argued that it could have introduced much-needed transparency in an industry that remains largely unregulated.
Critics of the bill feared the regulations might discourage investment and development, particularly in open-source software. Despite the veto, similar efforts are already inspiring lawmakers in other states to explore AI safety measures.
California continues to lead AI development, with the state already working to combat election deepfakes, protect Hollywood workers from unauthorized likeness use and prevent AI-driven discrimination in hiring practices.
Meanwhile, the Biden administration has proposed an AI Bill of Rights, but federal legislation regulating the rapidly growing industry has yet to be introduced.
OpenAI shakeup: Top execs leave as Sam Altman hamstrings nonprofit arm
A lot is going on behind closed doors at OpenAI. Longtime Chief Technology Officer Mira Murati announced she is leaving the company, with two other veteran executives following her out the door. Within hours, it came out that CEO Sam Altman is working to restructure the company into a for-profit endeavor, eliminating the nonprofit arm’s control over the for-profit business.
Is the timing coincidental? Here’s what each of the exiting executives had to say:
“I’m stepping away because I want to create the time and space to do my own exploration,” Murati said.
“It is time for me to take a break. There is no better capstone to my work here than shipping o1 to the world,” Chief Research Officer Bob McGrew said.
“Right now feels like a natural point for me to explore new opportunities outside of OpenAI,” Vice President of Research Barret Zoph said.
Long before talks of restructuring to become a for-profit company with a nonprofit arm – instead of the other way around – ChatGPT-creator OpenAI was accused of abandoning its nonprofit mission. Co-founder Elon Musk sued the company and Altman over it earlier this year.
Murati was asked about the accusations during a Wall Street Journal interview in March.
“Our mission is to develop artificial general intelligence and deploy it in a way that benefits humanity, and I think, up until now, and certainly our plan moving forward, is to deploy the technology in ways that benefit people and are accessible and very concretely with ChatGPT,” Murati said. “We have done that by having a free tier. That’s certainly our plan moving forward, to have ways. We make this technology easily accessible, freely available to anyone in the world, and we’ll figure out new ways to do that and bring the public along as well.”
OpenAI is certainly entering a new era. Since its founding in 2015, the AI innovator has been a nonprofit. In 2019, OpenAI added a “capped profit” arm that was still controlled by the nonprofit’s board of directors.
In November of last year, the nonprofit board fired Altman, saying little about why beyond that it had lost faith and trust in him. Within days, Microsoft had scooped up Altman and any OpenAI employees who wanted to come with him. Hundreds of employees threatened to quit if the board didn’t resign. The board did resign, and Altman came back and installed a new board of directors.
The conflict at OpenAI has long been about this tug-of-war between its core mission and commercial interests.
“We will need huge amounts of capital to complete our mission, and we have been extremely upfront about that,” Altman said weeks before his firing. “But we will need a lot more money. We don’t know exactly how much. We don’t know exactly how it’s going to be structured; what we’re gonna do. But it shouldn’t come as a surprise because we have said this all the way through. It’s just a tremendously expensive endeavor.”
Today, OpenAI is on a new fundraising round that New York Times sources say could value the company at $150 billion. It would make it the second most valuable private company behind TikTok owner ByteDance.
But Reuters sources say that valuation hinges on changing OpenAI’s corporate structure. This means the nonprofit would no longer be in control, the company would be run with a for-profit mindset, and Altman would get a stake in the company that could be worth $10.5 billion.
Investors would welcome the news of OpenAI running more like a tech startup, but for those worried about AI safety, there are concerns about whether OpenAI can responsibly govern itself in a world that is still largely unregulated.
Google’s antitrust loss ‘a warning’ to Big Tech: The government can win
Pressure is building on Big Tech after a federal court ruled Google is a monopoly. Google isn’t the only one the government is going after. Apple, Meta and Amazon are actively fighting lawsuits.
While Google’s appeal plays out, tech firms will be eyeing the courts, Federal Trade Commission and Department of Justice for clues to a shift in the regulatory landscape.
For how Google’s ruling might impact current and future antitrust cases, Straight Arrow News interviewed former FTC chair and commissioner Bill Kovacic.
This interview has been edited for length and clarity. Watch the interview in the video above.
Simone Del Rosario: Does this serve as a flashing red light for other Big Tech firms?
Bill Kovacic: It does indeed. They’ve seen the light flashing yellow for several years because, not only in the United States but around the world, we find competition authorities in individual jurisdictions beginning new investigations, initiating cases and, in the case of the European Union, adopting new regulatory frameworks such as the Digital Markets Act.
The Big Tech sector has seen gathering storm clouds now for a number of years, going back to, I’d say, the middle of the previous decade. But we’re now seeing the delivery of policy measures that foreshadow ever more significant forms of intervention. And this is an indication not only that the government can win, but that it can marshal the resources to do this kind of work well and bring the cases to a successful conclusion at trial.
It’s a warning that the government can prevail. The government can make well-founded arguments.
Bill Kovacic, former FTC chair and commissioner
But also it means that there will be more to come and there are other significant matters in the pipeline: another Department of Justice case involving Google involving ad serving; a case by the FTC challenging Meta for its acquisition of Instagram 10 years ago; an FTC case against Amazon; a Department of Justice case against Apple; state government cases attacking a number of these large enterprises.
I think for the business community, especially for the tech community, it’s an indication of things to come and that the successful defense of their position is not going to be something they can take for granted.
Simone Del Rosario: How does [the Google ruling] measure up to the Amazon situation where they’re being accused of having self-preference for their own products?
Bill Kovacic: This involves, I think in some ways, a harder case for the Federal Trade Commission. The FTC is arguing that you’ve given your own products, your own services, a better display compared to others, that you’re favoring them. I think the FTC is going to have a somewhat harder time dealing with the argument [of], ‘I’m a successful firm, don’t I have the freedom to offer consumers not only the better product, but to put my product first? To say, look at my product. Why should I have to display the products of my rivals in a better light?’
Amazon would not have unlimited freedom to make certain choices that are going to be the subject of the case. But Amazon’s arguments are arguably more within the framework of Supreme Court jurisprudence that has been encouraging of the ability of dominant firms to decide who they’ll deal with and how they’ll deal. And a concern on the part of judges that they shouldn’t be involved in making technical decisions about how companies operate, determining who they can deal with, the terms on which they can deal with other parties. So I think the FTC in some ways faces a somewhat harder challenge in the light of this existing jurisprudence.
But from Amazon’s point of view, watching the outcome in this first important [Google] case, it’s a warning that the government can prevail. The government can make well-founded arguments. They can present them capably. They’re probably going to be found to be a dominant enterprise and the real question will be, is this self-preferencing behavior acceptable?
I think what all leading firms learn from the experience we’ve just observed is you can take absolutely nothing for granted in this process. And it’s an environment in which judges might well be persuaded that you made an incorrect judgment about where the line of illegality is and you stepped over it. At a very basic level, this is an important caution that says you can lose these cases if you’re a defendant.
Simone Del Rosario: I’m curious what your take is on the types of cases against Big Tech that current FTC Chair Lina Khan has been taking. What do you make of her strategy when it comes to going after Big Tech?
Bill Kovacic: She has put in motion one significant case on her own watch: that’s the Amazon case we mentioned before. The other major case that she has, she inherited from the Trump administration. That’s the challenge to Meta for its acquisition of Instagram.
But the Amazon case is a very ambitious case. It is trying to define a new conception of what dominant firms can do, especially dominant firms that act as the owners of a platform on which products are sold, but their own products and the products of other parties operate on the same platform; to identify what a dominant firm can do by way of featuring its own products and perhaps treating the products of third parties on its platform, its competitors, differently.
That would be a significant development in the jurisprudence. I guess to put it in a very general way, it is a riskier case than the case that the DOJ is running against Google, the case that’s running against Apple, the other case that’s running against Google. And this is consistent, I think, with the chair’s philosophy, that a major role of the FTC should be to take on cases that involve more ambiguity, to take on cases that aren’t squarely within a framework where liability has been routinely found, but to move the frontiers outward.
So there’s a greater risk appetite at work there. The DOJ cases are very ambitious as well, but I’d say a signature element of the chair’s own program is to be willing to push the frontiers and to accept the risk that there will be judicial resistance and to accept the risk that there’ll be judicial rejection.
But for the sake of provoking the conversation with the courts and bringing these issues to the courts on a repeated basis, there’s a willingness, not simply in the area of Big Tech, but in other areas of the commission’s jurisdiction, to try to move the frontiers of enforcement outward and to acknowledge and accept the risk that these are hard cases to win. And [she does] not expect to prevail every time, but the very fact of bringing the cases, continuing the conversation with the courts, will have real value.
The law is facing criticism that it could discourage innovation before it even happens. But the European Commission didn’t pass it overnight. In fact, the law was first proposed back in 2020.
“It’s been drafted for the past few years and ChatGPT happened in the meantime,” Aleksandra Przegalinska, a senior research associate at Harvard University, told Straight Arrow News in July 2023.
After tweaks to adjust to the ever-changing generative AI reality, the commission passed the law in May of this year.
“It’s a regulation that looks at AI from the perspective of risk, mainly,” Przegalinska explained. “It says, okay, most of the applications of artificial intelligence that we have seen so far, we could call them minimal risk; but there are others that are high risk and there is also a way of using artificial intelligence that we would rather ban; like social scoring, for instance, or surveillance systems of different kinds.”
Last year during the State of the EU address, European Commission President Ursula von der Leyen spoke about the need to quickly regulate AI.
“AI is a general technology that is accessible, powerful and adaptable for a vast range of uses — both civilian and military,” von der Leyen said in September. “And it is moving faster than even its developers anticipated. So we have a narrowing window of opportunity to guide this technology responsibly.”
The AI Act separates types of technology into four different categories:
Prohibited AI systems will be banned as of February 2025. This could apply to AI that tries to predict whether a person might commit a crime based on their characteristics, or AI that scrapes the internet to bolster facial recognition systems.
High-risk AI systems have the highest regulatory burden outside of those that are outright banned. This includes AI that is used for critical infrastructure like electrical grids, systems that make employment decisions, and self-driving vehicles. Companies with AI that falls into this category will have to disclose their training datasets and prove human oversight.
Minimal-risk systems make up the largest chunk of innovation at about 85%. This is what’s known as “general-use AI.” The category includes generative AI like OpenAI’s ChatGPT or Google’s Gemini. For these types of AI, creators will need to make sure their models adhere to EU copyright rules and take proper cybersecurity precautions to protect users. These rules take effect in 12 months.
The fourth category is no risk. This is pretty self-explanatory and is for any AI use that doesn’t fall into the other three categories.
“We Europeans have always championed an approach that puts people and their rights at the center of everything we do,” von der Leyen said in a video posted to X. “So with our Artificial Intelligence Act, we create new guardrails not only to protect people and their interests but also to give business and innovators clear rules and certainty.”
In recent years, Microsoft, Google, Amazon, Apple and Facebook-parent Meta have spent massive amounts of money developing AI models.
The rules will be governed by the European Commission’s AI Office. A spokesperson for the commission said the office will be staffed by around 140 people.
If a company fails to comply with the new rules, it could face fines of up to $41 million or 7% of its global revenue. And the regulatory environment could force these tech giants to make a big decision.
Meta already announced it wouldn’t make its Llama AI model available in the EU. But that’s not because of the AI Act; it was already worried about the bloc’s General Data Protection Regulation.
Member states have until August of 2025 to put together bodies that will handle execution of the law in their country.