Artificial intelligence is the next big thing, from OpenAI’s ChatGPT to Google’s Bard AI. But as generative AI platforms increasingly get into the hands of everyday people, could the technology end up on the wrong side of a lawsuit? Here are the legal actions threatening AI in this week’s Five For Friday.
#5: Defamation
Chatbots have a tendency to make up facts. They are trained on a ton of data, but when given prompts, bots like ChatGPT don’t scour the internet for new information. As a result, the AI sometimes convincingly spews sentences out of thin air, including quotes. This has journalists, lawyers and academics chatting about whether AI chatbots can be liable for defamation. It would be pretty tough to prove a computer acted with malicious intent in spreading lies, but will the developers behind the AI platforms be held accountable?
#4: Intellectual Property Infringement
Character.AI lets users chat with their favorite fictional characters and real-life celebrities. Generative AI chatbots like this use available data in the ether to figure out how someone might respond. It can emulate text-based conversations with the likes of Kanye West or Keanu Reeves, among others. But it gets legally murkier when it offers the chance to converse with fictional characters like Luke Skywalker or Spider-Man. Is Disney going to continue to allow Character.AI, which is valued at around $1 billion, to use its intellectual property, especially when it has no control over what is said?
#3: Deepfakes
Machine learning has paved the way for deepfake technology, which allows users to take an existing video, replace a person’s face and even replicate someone’s voice. These have been making waves on social media with videos featuring Tom Cruise and a number of politicians. But there are even more nefarious actors out there who use the AI to make pornographic material. As of right now, few states have laws on the books that specifically address the technology. In the week of March 19, 2023, TikTok said it won’t allow deepfakes of any “real private figure,” but it appears public personalities are still fair game, as long as it’s disclosed and they aren’t making any endorsements.
#2: Malpractice
Is it possible for AI to be liable for malpractice? That’s a question being asked as part of a proposed class action lawsuit against DoNotPay. The company made headlines when it offered $1 million to anyone willing to use its “robot lawyer” to argue a case in front of the Supreme Court. Straight Arrow News even talked with DoNotPay CEO Josh Browder after he made the offer. But then he said he received “threats from state bar prosecutors” when he tried to use the AI lawyer in traffic court. The class action suit stems from a man who used the online service for a number of legal documents, claiming he received “substandard and poorly done results.” Concerns over AI malpractice can only get worse as people use the tech for medical advice, and who knows where it goes if AI therapy takes off.
#1: Copyright
Generative AI has to be fed a ton of information to be able to respond to user input, and this can be a copyright law minefield. Getty Images is already suing Stability AI over art crafted by its Stable Diffusion model. Getty’s suit claims the company scraped 12 million of its high-quality images to train the AI. Getty sells those images at a premium and has already made licensing deals with a number of AI companies. Artists also sued Midjourney, a similar AI platform that was used to generate images of every American president, but they look “cool” and have mullets. Take a look at all of that glory instead of focusing on the people out there making fake AI images of former President Donald Trump being arrested.