The world’s first artificial intelligence (AI) legislation went into effect Thursday, Aug. 1, in the EU. The AI Act, as it’s known, will regulate how companies develop and use the technology.
The law is facing criticism that it could discourage innovation before it even happens. But the European Commission didn’t pass it overnight. In fact, the law was first proposed back in 2020.
“It’s been drafted for the past few years and ChatGPT happened in the meantime,” Aleksandra Przegalinska, a senior research associate at Harvard University, told Straight Arrow News in July 2023.
After tweaks to adjust to the ever-changing generative AI reality, the commission passed the law in May of this year.
“It’s a regulation that looks at AI from the perspective of risk, mainly,” Przegalinska explained. “It says, okay, most of the applications of artificial intelligence that we have seen so far, we could call them minimal risk; but there are others that are high risk and there is also a way of using artificial intelligence that we would rather ban; like social scoring, for instance, or surveillance systems of different kinds.”
Last year during the State of the EU address, European Commission President Ursula von der Leyen spoke about the need to quickly regulate AI.
“AI is a general technology that is accessible, powerful and adaptable for a vast range of uses — both civilian and military,” von der Leyen said in September. “And it is moving faster than even its developers anticipated. So we have a narrowing window of opportunity to guide this technology responsibly.”
The AI Act separates types of technology into four different categories:
- Prohibited AI systems will be banned as of February 2025. This could apply to AI that tries to predict whether a person might commit a crime based on their characteristics, or to systems that scrape the internet to bolster facial recognition databases.
- High-risk AI systems carry the highest regulatory burden outside of those that are outright banned. This includes AI used for critical infrastructure like electrical grids, systems that make employment decisions, and self-driving vehicles. Companies whose AI falls into this category will have to disclose their training datasets and prove human oversight.
- Minimal-risk systems make up the largest chunk of innovation, at about 85%. This is what’s known as “general-use AI,” a category that includes generative AI like OpenAI’s ChatGPT and Google’s Gemini. Creators of these systems will need to make sure their models adhere to EU copyright rules and take proper cybersecurity precautions to protect users. Rules for this category take effect in 12 months.
- The fourth category is no risk. This is pretty self-explanatory and is for any AI use that doesn’t fall into the other three categories.
“We Europeans have always championed an approach that puts people and their rights at the center of everything we do,” von der Leyen said in a video posted to X. “So with our Artificial Intelligence Act, we create new guardrails not only to protect people and their interests but also to give business and innovators clear rules and certainty.”
While the rules are designed to protect citizens of the EU, American tech companies will likely be most affected by them.
In recent years, Microsoft, Google, Amazon, Apple and Facebook-parent Meta have spent massive amounts of money developing AI models.
The rules will be enforced by the European Commission’s AI Office, which a commission spokesperson said will employ around 140 people.
If a company fails to comply with the new rules, it could face fines of up to $41 million or 7% of its global revenue, whichever is higher. And the regulatory environment could force these tech giants to make a big decision.
Meta has already announced it won’t make its Llama AI model available in the EU. But that’s not because of the AI Act; the company was already worried about the bloc’s General Data Protection Regulation (GDPR).
Member states have until August 2025 to establish the bodies that will handle enforcement of the law in their countries.
Meanwhile, companies that already have a commercially available product like ChatGPT will have a 36-month grace period to come into compliance.