Cruz’s ‘Take It Down Act’ will force social media to pull deepfake intimate images
Elliston Berry was a 14-year-old high schooler when another student used AI to create deepfake sexually explicit images of her and other girls, then posted them on Snapchat. It ultimately took a phone call from Sen. Ted Cruz, R-Texas, to get the images removed from the social media app.
“I dreaded school and was anxious to even step foot on campus,” Elliston said during a press conference on Capitol Hill Tuesday, June 18. “Although the student left school, which allowed me to feel more confident about attending classes, there will always be a fear that these photos will resurface.”
According to Elliston’s mother, the perpetrator acted with malice and intended to “ruin” the girls. He was charged with sale, distribution, and display of harmful material to a minor and given probation.
“When he turns 18, his record will be expunged,” Anna McAdams, Elliston’s mother, said. “He will walk away unscathed. However, our girls will forever live in fear that when they apply for a job or college these pictures might resurface. There needs to be consequences for what he did.”
Cruz introduced the Take It Down Act to create harsher penalties for perpetrators and to penalize tech companies that fail to remove the content.
Those convicted of creating and posting the intimate material could be sentenced to two years in prison if it’s an image of an adult and three years if it’s an image of a child.
The bill would also require social media companies to remove the images within 48 hours of receiving a request from a victim. If the company fails to make a good faith effort, it can be considered an unfair or deceptive act under the Federal Trade Commission Act, which is enforced by the FTC.
“It can be maddening dealing with big tech and trying to get these images, these fake images of your child, taken down,” Cruz said. “And Big Tech, over and over again, has demonstrated an arrogance, an imperiousness, a lack of accountability.”
Cruz said the bill is partially modeled on U.S. copyright law: when someone posts copyrighted material on a platform, it is taken down almost immediately, and the user can be subject to a lifetime ban.
“If you upload a clip from Disney or a pop song, it’s gone in a heartbeat, risking an immediate ban from the platform,” Dawn Hawkins, CEO of the National Center on Sexual Exploitation, said. “Yet if you upload a rape, or hide a camera in a locker room or create a deepfake pornographic image, you can do so without scrutiny.”
States have their own laws protecting people from non-consensual intimate imagery, but only 20 state laws specifically deal with deepfake images. This bill would cover deepfakes at the national level and add the requirement that social platforms remove the content.
“Think back to what it’s like to be in junior high and high school, and that insecurity, that discomfort that we all have in our own skin, and then ask you to consider what these girls have been through,” Sen. Cynthia Lummis, R-Wyo., said.
Multiple bills addressing non-consensual intimate images have been brought forward for consideration, and it is unclear which, if any, have enough support to become law. Cruz’s bill has 12 bipartisan co-sponsors. Whichever bill ultimately moves forward will likely incorporate parts of each proposal and become part of a larger package to protect kids online.
Controversial deepfake Kari Lake video shows ease of AI disinformation
The Arizona Agenda, a newsletter that covers state politics, created a video using artificial intelligence to manipulate footage and make it seem as if Kari Lake was endorsing its coverage. The video showcased AI’s capabilities to create convincing videos that could blur the line between reality and fiction.
Stan Barnes, president of Copper State Consulting Group, told local media outlet AZFamily, “I think the Arizona Agenda, the media outlet that put that video into the public space, did everyone a favor.”
The video shows Lake’s AI counterpart appearing to support the Arizona Agenda and talking about the impact of AI on future elections.
According to its creators, the goal of the video was to demonstrate how easy it is to make fake content with AI and to highlight the challenges this poses, especially in telling the difference between real and fake videos around important events like elections.
Understanding how artificial intelligence creates these fake videos can help people notice the small details that give them away.
“Now that you know this is a deep fake, you’re probably catching a bunch of little inconsistencies that you can’t quite put your finger on,” deepfake Kari Lake said. “This is a less refined version of me, to help illustrate. My voice is pretty good, right? But my lips don’t quite sync up, my cadence isn’t natural, my skin is a little too smooth, and around the boundaries of my face, you can almost see the little glitches in the Matrix.”
The Arizona Agenda didn’t just create a deepfake; it also made a guide to help people spot fake AI content, showcasing how important it is to be aware of the dangers as AI technology becomes more accessible.
Lake’s team demanded the video be removed and threatened legal action if it wasn’t, saying it was created without her permission and used to make money. In response, the Arizona Agenda stressed that its goal was to educate and engage people in a responsible way.
Georgia lawmaker creates deepfake of colleague to garner support for AI bill
Rep. Brad Thomas, R-Ga., proposed legislation aimed at banning the use of deepfakes in politics. Deepfakes utilize artificial intelligence to manipulate audio and video, raising concerns about their potential to mislead voters.
To garner support from lawmakers, Thomas made his case to the Judiciary Committee by showcasing a deepfake video featuring the voices of Georgia state Sen. Colton Moore, R, and former Republican congressional candidate Mallory Staples. Both of the deepfaked politicians actually oppose the legislation, citing free speech and satire concerns.
In the video, the pair falsely appear to endorse the proposed bill. Thomas emphasized the urgency of addressing the issue to prevent abuses in future elections, stressing how easily accessible these AI tools are and warning that their sophistication outpaces current legislation.
Following deliberation, the bill received bipartisan support, passing out of committee with an 8-1 vote.
Violators of the law would face penalties of prison time and fines.
Thomas acknowledged the challenges of enforcing the law but expressed confidence in the collaboration between law enforcement agencies to address election-related fraud.
Tennessee’s new ELVIS Act protects musicians from AI impersonations
Tennessee Gov. Bill Lee, R, signed a groundbreaking law Thursday, March 21, designed to shield artists from unauthorized artificial intelligence impersonations. The Ensuring Likeness Voice and Image Security (ELVIS) Act addresses growing concern among artists about deepfake technology and AI impersonations that mimic their voices.
This law recognizes an artist’s voice as a protected personal right and sets stricter guidelines on the use of someone’s name, image and likeness.
“The really great thing about this is Tennessee is the first in the nation to enact this legislation,” Lee said at the signing. “This will be a blueprint and we expect that it will be enacted multiple times over multiple states and, at some point, artists all across America will be protected because of what started here in the music capital of the world. We will ensure that no one can steal the voices of Tennessee artists and I believe that what we’re doing here today will ensure that no one will steal the voices of American artists once this is enacted across the country.”
The law has support from the music community. Lee, alongside country stars Luke Bryan and Chris Janson, signed the act at a local honky-tonk music venue, calling it “the coolest bill signing ever.”
“What an amazing stance, or precedent, to set for the state of Tennessee: to get in front of this, to be the leaders of this and to show artists like myself, current artists, artists that are moving here, following their dreams, to know that our state protects us and what we’re about and what we work so hard for,” Bryan said.
“From Beale Street to Broadway, to Bristol and beyond, Tennessee is known for our rich artistic heritage that tells the story of our great state,” Lee added. “As the technology landscape evolves with artificial intelligence, I thank the General Assembly for its partnership in creating legal protection for our best-in-class artists and songwriters.”
The bill also received backing from the music industry and the Human Artistry Campaign, a worldwide effort by entertainment groups advocating for a thoughtful use of AI.
“This incredible result once again shows that when the music community stands together, there’s nothing we can’t do,” Mitch Glazier, Recording Industry Association of America (RIAA) chairman and CEO, said. “We applaud Tennessee’s swift and thoughtful bipartisan leadership against unconsented AI deepfakes and voice clones and look forward to additional states and the U.S. Congress moving quickly to protect the unique humanity and individuality of all Americans.”
The ELVIS Act updates the Personal Rights Protection Act of 1984, which was first enacted to protect Elvis Presley’s publicity rights posthumously.
AI-generated misinformation deemed greatest threat in 2024 election year
The internet gives users the ability to upload content and connect with millions of people all over the world. However, this ability to connect also makes users susceptible to deepfakes — media that appears real but is manufactured. With the help of artificial intelligence, deepfakes are looking more realistic than ever.
The 2024 election will be the first U.S. presidential election since AI became widely available to the general public.
GOP frontrunner and former President Donald Trump has been a victim of AI misinformation before and is now facing another deepfake scandal following the release of more Epstein documents.
As the public anticipated who would be named in the files and connected to sex trafficker Jeffrey Epstein, some people seized the opportunity to manipulate copies of the documents using AI, adding new names to the documents that were never there.
While Trump was named in the actual documents, he was not accused of any wrongdoing.
Photos manipulated and constructed with AI also surfaced, including some placing Trump on Epstein’s plane, surrounded by young girls. The images were shared on social media and can still be found on X.
Both Trump and President Joe Biden have called for some sort of AI controls.
Actor Mark Ruffalo was among those who shared the fabricated images before acknowledging the mistake. Trump shared news of Ruffalo’s error on his social media platform Truth Social, saying, “This is A.I., and it is very dangerous for our Country!” and “Strong Laws ought to be developed against A.I. It will be a big and very dangerous problem in the future!”
This is a similar sentiment to what Biden said when he signed an executive order on mitigating risks connected to artificial intelligence.
“We face a genuine inflection point in history, one of those moments where the decisions we make in the very near term are going to set the course for the next decades,” Biden said. “And with the position we lead the world, the toughest challenges are the greatest opportunities.”
According to the World Economic Forum’s Global Risks Report, released Jan. 10 ahead of the Davos summit, surveyed experts rank “false and misleading information” exacerbated by artificial intelligence as the “top immediate risk to the world.”
Generative AI threatens 2024 elections; false Israel-Hamas images spread
The tech world is preparing for what some say has already begun disrupting democratic processes: artificial intelligence. More specifically, the focus is on generative AI, a type of AI that creates fake but convincingly realistic images, audio and text.
At the Reuters NEXT conference in New York during the week of Nov. 5, Gary Marcus, an AI entrepreneur and professor emeritus of psychology and neural science at New York University, said the peril AI poses to democracy stands out as its most substantial risk.
“There are a lot of elections around the world in 2024, and the chance that none of them will be swung by deepfakes and things like that is almost zero,” Marcus said.
Politicians have been particularly vulnerable to these threats. Meta has taken preemptive measures, prohibiting advertisers from using its generative AI tools for political ads on Facebook and Instagram.
Starting next year, Meta will also require advertisers to disclose the use of third-party AI software for political, electoral or social ads. Failure to comply may lead to ad rejection, and repeated violations could incur penalties.
While deepfake detection has historically been imperfect, the detection firm DeepMedia claims its product identifies deepfakes with 99% accuracy.
“The thing that makes our deepfake detection highly accurate, really fast and easy to use, is the fact that we both do generation and detection, these are kind of two sides to the same coin,” COO and co-founder Emma Brown said.
Brown cautioned against focusing solely on entirely fabricated content, noting instances where only a brief segment of a video is manipulated. Such alterations are difficult to detect, even for highly trained analysts, which makes them a critical concern.
“One thing that we’ve found is, you know, there are certain situations where only three seconds of a video are faked, and it might be a 20-minute video, and it might change the meaning of something,” Brown said. “But it’s only three seconds.”
Beyond the domestic effects, deepfakes are further complicating international issues.
“One of the things that we’re doing is we’re working directly with platforms to make sure that it’s integrated for all users,” Brown said. “And we’ve actually recently come out with a Twitter bot in response to Israel, Hamas.”
Recent revelations about Adobe selling AI-generated images depicting scenes of war, including explosions and destroyed homes in Gaza, further underscore the challenges. Adobe used a label to indicate the images were generated with AI.
Experts, including Brown, anticipate that the prevalence of deepfakes will only increase, flooding social media platforms with more manipulated video and audio content.
Shutdown averted but Congress faces new challenges: The Morning Rundown, Oct. 2, 2023
With a government shutdown averted for now, Congress faces new challenges. And an A-list celebrity is putting out a warning about artificial intelligence. These stories and more highlight The Morning Rundown for Monday, Oct. 2, 2023.
Congress passes stopgap funding bill to avert shutdown; McCarthy is called out
Congress was able to avert a government shutdown by passing a stopgap funding bill late Saturday, Sept. 30, 2023. A shutdown would have meant millions of federal employees not being paid, but, for now, that is not the case. The bill, signed by President Joe Biden before the deadline hit, funds the government through Nov. 17, meaning Congress will have to find a way to pass another funding bill in just a matter of weeks.
This stopgap bill first passed the Republican-led House, where it found more support from Democrats than Republicans in a 335-91 vote. While the bill increases federal disaster assistance by $16 billion, the amount Biden was seeking, it does not provide any additional aid to Ukraine, which was a White House priority opposed by many Republicans.
On Sunday, Oct. 1, the president pressed Congressional Republicans to back a bill for that very cause, saying he expected Speaker Kevin McCarthy to keep his commitment to secure the funding.
“We’re going to get it done. I can’t believe those who voted for supporting Ukraine, the overwhelming majority of the House and Senate, Democrat and Republican, will, for pure political reasons, let more people die needlessly in Ukraine,” Biden said.
When asked by CNN, McCarthy’s office declined to say whether he gave the president any confirmation on a future Ukraine deal. Meanwhile, McCarthy is facing opposition from his own party over the bill that had majority support from Democrats. Rep. Matt Gaetz, R-Fla., said Sunday that he would try to remove the speaker from his leadership position.
“Speaker McCarthy made an agreement with House conservatives in January and since then he’s been in brazen, material breach of that agreement. This agreement that he made with Democrats to really blow past the spending guardrails we had set up is a last straw,” Gaetz said.
Gaetz announced he would be filing a motion to vacate the chair. McCarthy responded, saying, “So be it. Bring it on. Let’s get over with it and let’s start governing.”
Trump says he will attend opening of NY civil fraud trial
“I’m going to court tomorrow morning to fight for my name and reputation,” Trump posted on Truth Social on Sunday.
Security preparations were already underway in case the former president were to make an appearance.
The case was brought last year by New York Attorney General Letitia James against Trump, his eldest sons and his companies, accusing them of inflating the former president’s net worth.
Last week, the judge overseeing the case issued his first ruling in favor of the attorney general, finding Trump liable for fraud. The judge said Trump misrepresented his wealth to banks for decades, overstating it by as much as $3.6 billion.
According to court records, Trump is expected to be called as a witness later in the trial.
Newsom to appoint Butler to fill Feinstein’s seat
California Gov. Gavin Newsom, D, has announced his choice to fill the Senate seat of the late Sen. Dianne Feinstein, D-Calif., the longest-serving female senator in U.S. history, who died last week at the age of 90.
The governor has chosen Laphonza Butler, the president of EMILYs List, a committee that works to elect Democratic women, and a former adviser to Vice President Kamala Harris.
Newsom issued a statement late Sunday night, after news of the appointment broke, saying Butler “represents the best of California and…will carry the baton left by Senator Feinstein, continue to break glass ceilings, and fight for all Californians in Washington D.C.”
The governor had previously said that he would fill any Senate vacancy with a Black woman. Butler would become the third Black woman to serve in the Senate’s history.
Newsom said the choice is an interim appointment and he would not select any of the candidates who are running to succeed Feinstein in 2024. They include Reps. Barbara Lee, Adam Schiff and Katie Porter.
Supreme Court begins new term with major cases on the docket
The Supreme Court begins its new term Monday. The nine justices, six conservative and three liberal, are prepared to tackle several major issues over the next nine months, including gun rights, social media, the power of federal agencies, electoral districts and, perhaps, abortion pills.
On Oct. 31, the court will hear arguments concerning whether the First Amendment prohibits public officials from blocking critics on social media sites like Facebook and X.
On Nov. 7, a case will be presented to the court on whether a federal law barring people under domestic violence restraining orders from owning a gun violates the Second Amendment’s right to keep and bear arms.
In addition, the Biden administration has asked the justices to hear its appeal of a ruling by the 5th U.S. Circuit Court of Appeals in New Orleans that would bar telemedicine prescriptions and mail shipments of the abortion pill mifepristone.
Federal student loan payments resume after 3-year pause
Federal student loan payments resumed on Sunday, Oct. 1, after a three-year pause due to the COVID-19 pandemic. Interest on the loans began accruing again on Sept. 1.
Borrowers will receive a bill saying how much they owe each month, at least 21 days before their due date. There is a yearlong grace period to help borrowers: missed or late payments in the next 12 months won’t be reported to the credit bureaus, but interest will continue to accrue.
Borrowers also have the option to sign up for the new income-driven repayment program, called SAVE, which was announced after the Supreme Court struck down the Biden administration’s loan forgiveness plan. According to the Education Department, the SAVE plan will help the typical borrower save more than $1,000 per year on payments.
If you need more information on your loans, you can log on to studentaid.gov.
Tom Hanks warns ‘beware’ AI version of himself
Oscar winner Tom Hanks sent a warning to his fans not to believe everything they see. On Sunday, he posted to Instagram an image that appeared to be of him, with the caption: “Beware!”
It turns out it was not an actual photo of the “Forrest Gump” actor, but a computer-generated one created with artificial intelligence.
Hanks’ caption went on to explain further, saying, “There’s a video out there promoting some dental plan with an AI version of me. I have nothing to do with it.”
Hanks spoke about the challenges actors face when it comes to AI on a podcast earlier this year.
“I could be hit by a bus tomorrow and that’s it, but performances can go on and on and on and on. Outside of the understanding of AI and deepfake, there’ll be nothing to tell you that it’s not me and me alone. And it’s going to have some degree of lifelike quality. That’s certainly an artistic challenge but it’s also a legal one,” Hanks said on The Adam Buxton Podcast.
The interview was done months before Hollywood actors went on strike in July, with artificial intelligence being one of the strike’s key issues. SAG-AFTRA and the Alliance of Motion Picture and Television Producers are expected to meet Monday for the first formal bargaining talks.
Our business correspondent Simone Del Rosario has a series on the rise of artificial intelligence and what the future holds for the technology.
Is it alive? How AI’s uncanny valley could threaten human interaction
The uncanny valley as a concept has been around for decades. But as artificial intelligence develops, technology is several steps closer to tricking human brains and manipulating emotions.
The term uncanny valley is used to describe the emotional response from humans when encountering robots that appear human-like. AI expert and philosopher Aleksandra Przegalińska said humans have a common biological response to this interaction: an eerie sensation.
We do see avatars that look almost exactly like humans, where that immediate response of your body is just acceptance… But then there’s suddenly a glitch.
Aleksandra Przegalińska, AI senior research associate, Harvard University
“In the era of deepfakes and also in the context of the fact that we are mostly interacting with the digital world, not necessarily with physical robots, this uncanny valley idea is very, very problematic,” Przegalińska said.
In the video above, she details how encounters with human lookalikes could make people afraid of actual human interaction.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.
Below is an excerpt from that interview.
Simone Del Rosario: I was hoping that you could explain for me this concept of the uncanny valley. I’ve heard you talk on it before and I just thought it was a really fascinating look at where people should be designing AI versus where they should be steering away from.
Aleksandra Przegalińska: This is a concept that my team and I have been researching for the past couple of years. It was mainly focused on building robots and how not to build them.
The uncanny valley is this concept that tells us that if something resembles a human being but not fully, then we are scared of it. So our probably very deeply ingrained biological response to something that looks like a human and is not a human, and we know that, is to just have this eerie sensation that this is not something we should be interacting with.
I’m not sure if you’re familiar with a robot called Sophia. It’s very, very popular on social media and it gives you that sensation or effect of the uncanny valley — just sort of very confusing to figure out whether you’re really talking to something that is alive or not. Is it healthy or is it sick? What’s going on with it? Why is the mimic so weird? Why are the eyes rolling so slowly?
So it does resemble a human, but then again, it’s not a human. And that is interesting because now in the era of deepfakes and also in the context of the fact that we are mostly interacting with the digital world, not necessarily with physical robots, this uncanny valley idea is very, very problematic.
We do see avatars that look almost exactly like humans, where that immediate response of your body is just acceptance. You’re seeing something that looks like a human and it talks and it’s all good. But then there’s suddenly a glitch and that glitch is that moment when you realize that this may not be a human.
Then who knows? Maybe in the future, when there will be more deepfakes, we will become very cautious and afraid of interactions with others because it will be very hard to classify who it is that we’re dealing with.
ChatGPT launched an AI revolution. Here’s where we stand nearly 1 year on.
Artificial intelligence hit the mainstream like a firestorm following the release of OpenAI’s ChatGPT. Technology companies scrambled to join the AI arms race, led by Microsoft’s $10 billion investment in OpenAI. At the same time, Capitol Hill sprang into action, holding hearing after hearing over safety and regulation.
The overnight sensation of generative AI is not likely to burn out as quickly as it came on. The endless possibilities are expected to transform technology, the workforce and society at large. At this pivotal juncture, humans will shape where artificial intelligence goes from here, but many fear the direction it will take.
AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness.
Przegalińska is a senior research associate at Harvard University analyzing AI, robots and the future of work. She has a doctorate in the philosophy of artificial intelligence from the University of Warsaw and is an associate professor at Kozminski University.
The AI Revolution
Interest in artificial intelligence exploded when ChatGPT first hit the masses in November 2022. While AI has technically been around for decades, the sheer accessibility of directly interacting with a chatbot led to a surge in chatter, as evidenced by Google search trend data.
But it wasn’t just talk. Companies were quick to put money on the table. Nothing comes close to Microsoft’s $10 billion OpenAI investment, but tech companies, health care firms and venture capitalists were quick to ink their own deals in the first quarter of 2023. Microsoft’s move also triggered an AI search-engine race, pushing Google to release Bard, its experimental AI-powered search tool.
The Fear Factor
As humans reckon with the future of artificial intelligence capabilities, Aleksandra Przegalińska, who holds a doctorate in the philosophy of AI, says the most prevalent emotion is fear.
It is mostly a story that is infused with fear, with a sense of threat; where artificial intelligence can reach a level where it figures out that it’s also as smart as we are, perhaps even smarter, and then becomes our enemy. And I think it’s in many ways a story about our history.
Aleksandra Przegalińska, AI expert
Przegalińska said many factors play into this fear, from movies like “The Terminator” to fear spread by AI developers themselves.
This past spring, AI leaders and public figures attached their names to the following statement. Key names that signed on include OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Google DeepMind CEO Demis Hassabis and Bill Gates.
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Center for AI Safety
“Sam Altman is obviously telling the Congress that we should all be scared but then again, he’s incubating GPT-5 as we speak,” Przegalińska said. “This to me seems a bit strange. Either you say, ‘Okay, there is a chance that this technology will be misused and this is the way I would like to address these concerns,’ or you’re saying, ‘Well, it’s ultimately the worst thing that can happen to humanity and I just simply stop building it at all.’”
I think maybe he has some regrets being left with Twitter instead of surfing this big AI wave.
Aleksandra Przegalińska on Elon Musk advocating for an AI ‘pause,’ citing risks to society. Musk was an early investor in OpenAI.
Perhaps the biggest fear of AI is the possibility that it could replace so many livelihoods. In March, investment bank Goldman Sachs predicted that AI could automate the equivalent of 300 million full-time jobs between the U.S. and Europe.
Przegalińska, whose research at Harvard University focuses on AI and the future of work, says developers should focus on how humans can collaborate with AI to increase productivity but not replace humans altogether.
Many things can go wrong if you decide to choose that pathway of full automation.
Aleksandra Przegalińska, AI expert
“But our jobs will change and some jobs will probably disappear because of artificial intelligence,” Przegalińska said. “And I do think that politicians have to look at that as well.”
In May 2023, AI was responsible for 3,900 job cuts in the U.S., according to data from Challenger, Gray & Christmas, Inc.
When it comes to regulating AI, the U.S. is not the one setting the global groundwork. This summer, the European Union passed a draft law known as the AI Act, legislation years in the making. But it’s just a start.
“I do regret a bit that this regulation happened this late,” Przegalińska said. “Many people from the AI field have been calling for regulation before ChatGPT and way before ChatGPT. We knew already that there would be some problems because some of these systems are just not explainable. They’re like black boxes; they’re very difficult to understand and yet we use them.”
Meanwhile, lawmakers on Capitol Hill have held several hearings about the risks posed by artificial intelligence and ways to regulate its use. However, American efforts are considered to be in the early stages, and lawmakers have been criticized for not understanding the technology they aim to regulate, as during earlier Big Tech hearings.
“There was a bit of a mismatch in terms of digital competencies,” Przegalińska said.
I do hope that this time around, the politicians will come prepared, that they will be better prepared for these types of discussions.
Aleksandra Przegalińska, AI expert
How should AI be regulated to combat deepfakes and bad actors? Click here for more.
The Uncanny Valley
How easy is it to tell what is real and what is artificial? AI today has some serious quirks, like generating eight or nine fingers on one hand. But as technology advances, it’ll get more and more difficult to separate fact from fiction.
I have my own deepfake, and it’s so good that for me, it’s even sometimes hard to figure out whether it’s her speaking or myself. Really, that’s so uncanny.
Aleksandra Przegalińska, AI expert
In real life and in movies, those in robotics have pursued building robots that look and act human, coming close to crossing the uncanny valley.
“The uncanny valley is this concept that tells us that if something resembles a human being, but not fully, then we are scared of it,” Przegalińska said. “So our probably very deeply ingrained biological response to something that looks like a human and is not a human, and we know that, is to just have this eerie sensation that this is not something we should be interacting with.”
What are the psychological effects of crossing into the uncanny valley? Click here to watch.
Full interview time stamps:
0:00-2:22 Introduction
2:23-5:00 My Unconventional Path To AI Research
5:01-9:42 How The Terminator, Media Drive Our AI Fears
9:43-13:01 Sam Altman, AI Developers Spreading Fear
13:02-14:00 Elon Musk’s Big Regret?
14:01-18:55 How ChatGPT Changed Everything
18:56-25:01 Do Politicians Know Enough About AI To Regulate?
25:02-31:48 The Dangers Of The Uncanny Valley, Deepfakes
31:49-39:27 Will AI Cause Massive Unemployment?
39:28-43:49 Answering Most-Searched Questions About AI
AI can create deepfake child porn. State AGs want Congress to address that.
Every state attorney general in the country is asking Congress to investigate and pass new laws that address artificial intelligence’s ability to make child pornography. In a letter to congressional leaders, the attorneys general said AI is “creating a new frontier for abuse that makes such prosecution more difficult.”
The letter detailed exactly how the crimes are committed.
“AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions,” the letter said. “This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.”
The letter is reaching a receptive audience in Congress.
“Child Sexual Abuse material is rampant, and vicious. And we need more protection against AI propagation and spread of it,” Sen. Richard Blumenthal, D-Conn., told Straight Arrow News.
Blumenthal and Sen. Josh Hawley, R-Mo., said they will soon introduce legislation that addresses the issue. The bill is expected to target Section 230 of the Communications Decency Act, which creates a liability shield for online service providers, like Facebook and Twitter, for content users post on their sites.
“Artificial intelligence should have no, none, legal immunity like many social media outlets now have. AI can spread and propagate Child Sexual Abuse Material that does enduring and devastating harm to kids,” Blumenthal said.
“I would make it clear that they are liable for civil suits. And I think people ought to be able to get into court and sue them,” Hawley said.
The Senate is holding its inaugural AI forum Wednesday, Sept. 13, when a bipartisan group of lawmakers will meet with the top minds in AI to develop a plan for congressional action.
“We can’t put our heads in the sand like ostriches. If we don’t do anything, AI is going to move forward without us and the dangers could be maximized and the opportunities could be minimized,” Sen. Chuck Schumer, D-N.Y., said.
Congress is also working on legislation that will regulate how AI companies handle intellectual property and name, image and likeness.