The future of artificial intelligence policy in the United States is at a crossroads. President Joe Biden has prioritized safety, transparency and regulation, while President-elect Donald Trump plans to take a more deregulated, innovation-driven approach.
Biden’s executive order on AI safety, signed in October 2023, aims to ensure rigorous oversight of AI systems. The plan includes safety testing for critical sectors such as energy and transportation, strict privacy protections and mandates for transparency to prevent bias and discrimination.
The administration argues that these safeguards are essential to protecting the public and minimizing risks from the rapidly advancing technology.

Trump, however, has announced plans to repeal Biden’s executive order and implement a contrasting strategy that prioritizes economic growth and innovation. His administration intends to designate AI hubs across the country, offer tax breaks and incentives to tech companies, and scale up nuclear energy and cloud infrastructure to power advanced AI systems.
He also intends to reduce regulations, arguing that Biden’s framework stifles American companies in the global AI race. Critics of Trump’s plan warn that deregulation could lead to dangerous flaws in AI systems, exacerbate bias or leave the technology vulnerable to misuse.
Proponents argue that fewer restrictions will enable U.S. companies to compete more effectively against China, which has made massive investments in AI technology.
With the details of Trump’s policy expected in late 2025, the debate over regulation versus innovation is already drawing sharp divides in Washington, D.C. Both approaches carry significant implications for the future of AI development, the economy and national security.