Meta’s Oversight Board rules ban on Islamic term harms free speech
On Tuesday, March 26, after a detailed review, Meta’s independent Oversight Board advised ending the widespread ban on the Arabic word “shaheed.” The term is often translated as “martyr” but can also mean “to bear witness” or “testify” in Arabic, and the board found the ban overly restricts content from millions of users.
The board noted that Meta’s approach disproportionately limits free expression, with “shaheed” leading to more content removal than any other term across its platforms.
Criticism of the word’s censorship grew amid the intensifying conflict between Israel and Hamas in October, with civil rights groups accusing the social media giant of censoring support for Palestinians.
Meta argued that extremists use “shaheed” to glorify terrorism, but the board found the company failed to consider the word’s other meanings, which do not glorify martyrdom.
Helle Thorning-Schmidt, Oversight Board co-chair and former prime minister of Denmark, criticized the policy.
“Meta has been operating under the assumption that censorship can and will improve safety, but the evidence suggests that censorship can marginalize whole populations while not improving safety at all,” Thorning-Schmidt said.
Meta has promised to review the board’s feedback and respond within 60 days, either accepting or rejecting the policy recommendation.
Instagram limits political content as US presidential election nears
Instagram has changed its rules on political content. By default, the platform now limits recommendations of posts, reels and accounts deemed political in nature, according to the company.
The new limits on political content come just months before the 2024 presidential election, and some users are criticizing Instagram for the move. However, the company is far from the first platform to try to de-emphasize political content.
Instagram users will still see political posts from accounts they choose to follow, and users have the option to opt out of the limitations on political content. However, they will have to go through a series of settings changes to turn political content back on.
The political designation applies to any post or reel that discusses elections, campaigns, laws or social issues such as immigration or abortion, according to Instagram’s Help Center page.
Instagram said the decision to limit political posts is meant to create a “great experience for everyone.” While users will still see posts from accounts they follow, the platform will no longer recommend political content from accounts they do not follow.
Instagram will also stop surfacing political content in recommended reels and suggested posts. The same controls will roll out on Facebook at a later date, according to The Wall Street Journal.
Facebook has a history of trying other means to keep politics out of news feeds. The company has faced criticism over the years for promoting and demoting certain political views and content on its platform. A Facebook spokesperson told the Journal that users want to see less politics overall.
According to the spokesperson, Facebook “tested various approaches” and “ultimately implemented changes that reduces politics.”
Political Facebook posts reportedly account for “less than 3% of total content views in users’ newsfeeds.”
According to Reuters, TikTok also has stipulations when it comes to posting political videos. Users are not allowed to fundraise for a politician or political cause. They are also not allowed to promote videos that are political in nature and touch on sensitive social issues, like immigration and abortion, according to the company.
According to CBS News, any “content controls” social media companies wish to impose on their users are entirely up to them. However, a U.S. Supreme Court case set to be decided later this year could change that, if a majority of justices rule that such controls violate users’ First Amendment rights.
NYC sues social media owners, alleges Big Tech made platforms addictive
In January, New York City became the first major U.S. city to declare social media an “environmental toxin.” On Wednesday, Feb. 14, Mayor Eric Adams announced that New York City has filed a lawsuit against five social media companies, alleging Facebook, Instagram, Google, YouTube and Snapchat knowingly made their platforms addictive for teens.
The lawsuit seeks compensatory damages from the companies and aims to hold Big Tech more accountable.
Adams claims New York City spends $100 million each year on mental health services and programs for adolescents. According to ABC News, the city is seeking to have some of that repaid in the lawsuit.
“NYC Plaintiffs have incurred and will continue to incur costs related to addressing the youth mental health crisis that Defendants created, caused, contributed to, and/or maintained,” the lawsuit said. “NYC Plaintiffs request relief relating to the damage or injury they have suffered, including actual and compensatory damages in an amount to be determined at trial.”
“Over the past decade, we have seen how addictive and overwhelming the online world can be, exposing children to content they are not ready for, disrupting the educational process, and seriously damaging their self-esteem,” Adams said during a press conference Wednesday.
In addition to the lawsuit, New York City officials are also calling for increased mental health care, pointing to data that shows rates of “suicidal ideation” have increased among adolescents. City officials are also requesting more research on the impact social media is having on the mental health of teens.
Recently, social media companies have faced increased scrutiny from state and national politicians. Several social media CEOs were questioned and scolded by some U.S. lawmakers during public testimony. Meta CEO Mark Zuckerberg was pushed to apologize by Sen. Josh Hawley, R-Mo., during the hearing.
“Would you like to apologize to the victims who have been harmed by your product?” Hawley said. “Would you like to apologize for what you’ve done to these good people?”
“I’m sorry for everything you have been through,” Zuckerberg said as he faced the families in the audience. “No one should have to go through the things that your families have suffered.”
Zuckerberg did, however, contend that a decline in mental health among teens is not directly related to his product.
In response to the New York City lawsuit, a Meta spokesperson told Reuters the company wanted teens to have “safe, age-appropriate experiences online.” A TikTok spokesperson also responded to ABC News’ report on the lawsuit, saying the company will “continue to work to keep the community safe by addressing industry-wide challenges.”
Google also issued a statement defending the company against the lawsuit.
“We’ve built services and products to give young people age-appropriate activities, experiences, and parents robust controls,” Jose Castaneda, a Google spokesperson, told Reuters. “The allegations in this complaint are simply not true.”
Snapchat also released a statement to ABC News.
“While we will always have more work to do, we feel good about the role Snapchat plays in helping close friends feel connected, happy, and prepared as they face the many challenges of adolescence,” said Ashley Adams, a Snapchat spokesperson.
While New York City looks to take legal action against social media companies, a federal judge in Ohio granted tech industry group NetChoice, which represents Facebook and TikTok among others, a legal win.
U.S. District Judge Algenon Marbley ruled that an Ohio law requiring people under the age of 16 to get parental permission to use social media is unconstitutional, according to Reason Magazine.
“Foreclosing minors under the age of 16 from accessing all content on websites that the act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media’s harm to children,” Marbley wrote.
Marbley’s temporary injunction is another blow to an increasing trend nationwide of requiring age verification to use social media apps. A federal judge ruled in September that an age restriction in Arkansas violated the First Amendment, blocking the law.
NY officials propose restricting social media algorithms for kids
Gov. Kathy Hochul, D-N.Y., is throwing her support behind legislation that would limit the power of social media algorithms for the platforms’ youngest users. The proposed legislation would also require age verification for anyone under 18.
On Oct. 11, 2023, Hochul announced the state would pursue legislation that would limit the power of Big Tech algorithms and how they connect kids to content they’ve liked in the past. Lawmakers believe the law would protect kids from potentially harmful content.
“They’ll give more parents more tools to keep their kids safe, limit social media’s outreach, and also loosen the grip that these algorithms have on the way our kids think and act,” Hochul said.
According to The Wall Street Journal, the legislation is part of a “broader spending plan” by Hochul. The Journal also said the proposal is likely to pass, with state lawmakers required to take action by March 31.
The proposed law in New York is another critical juncture in the fight to rein in the powers of Big Tech. Forty-one attorneys general nationwide have filed lawsuits against Meta, and more cities and states are enacting laws to combat what critics contend are the harmful impacts of social media on the health of kids and teens.
The Wall Street Journal interviewed New York Attorney General Letitia James, D, who said that she believes the legislation, if passed, will survive a legal challenge. James said it will not falter under legal scrutiny like laws in Ohio and Arkansas, which ban social media for kids.
The laws banning social media for kids in Ohio and Arkansas cannot be enforced because of legal challenges from NetChoice, a trade group that represents technology companies including Meta and TikTok. A district judge granted NetChoice a temporary injunction after it filed lawsuits against all three states. Utah’s law does not take effect until March 1, 2024.
James believes New York’s proposed law will survive because it differs in scope from the laws of states that previously enacted age restrictions. Rather than regulating content broadly, it would target a “delivery mechanism.”
The bill’s sponsor, state Sen. Andrew Gounardes, D-N.Y., spoke with the Journal.
“If you want to follow the Taylor Swift fan page, that’s great,” Gounardes said. “What we don’t want is where you click on one thing and in 15 minutes be shown self-harm videos.”
New York is not the only state attempting to change laws on social media usage. The Wall Street Journal reported that, according to the National Conference of State Legislatures, at least 30 states have some 140 bills on the books regarding media literacy.
In June, Politico reported that Connecticut began requiring online platforms to perform children’s safety assessments, which are designed to limit kids’ interactions with potentially dangerous people on social media.
Social media protections for kids have also been prominently featured in national politics recently. Congress members accused companies like Meta of hurting American youth through algorithms that target teens by enticing them with content.
“I know you don’t mean it to be so, but you have blood on your hands,” Sen. Lindsey Graham, R-S.C. said. “You have a product that’s killing people.”
However, Meta CEO Mark Zuckerberg argues that algorithms are taken out of context, telling The New York Times that his company uses them to identify and remove dangerous content.
Zuckerberg has also questioned the link between mental health issues in teens and social media usage. The Meta CEO said during testimony in front of U.S. senators that he did not see a “causal” link.
However, many lawmakers and activists contend that there is a link between social media and a rise in suicides, depression and mental health disorders among teens. In May, The New York Times reported that the U.S. surgeon general warned of the negative effects social media usage can have on anxiety and depression among kids.
Hochul is not the first public official in her state to take action on the reportedly harmful effects of social media use. In January, New York City Mayor Eric Adams, D, declared a public health emergency regarding social media usage among teens and kids. In response, New York City became the first major city in the U.S. to label social media an “environmental toxin.” Adams said he would reveal more details about his plan of action at a later date.
Why does Gen Z hate Facebook? Social network turns 20 with shrinking teen use
The world’s most popular social network will turn the big 2-0 on Feb. 4. Long before teens started logging out in droves, Facebook was one of the most coveted tickets for college students. Here’s a look at why Gen Z has such disdain for the site that inspired the film “The Social Network.”
Mark Zuckerberg launched “thefacebook” in 2004 as a directory of students at Harvard, and it was an instant success. It soon expanded to other Ivy League schools and eventually to most colleges and universities in the U.S. and Canada. This was mostly before the company got wise and dropped “the” from its name.
By 2006, two and a half years after its debut, Facebook became available to anyone aged 13 and older with a valid email address. At the time, Facebook earned roughly $30 million per year in revenue, a relatively small social media fish in a pond dominated by MySpace.
The company went on to purchase Instagram and WhatsApp, become a public company and later rename its parent company Meta, aligning with a newfound focus on the metaverse and leveraging its purchase of virtual reality headset maker Oculus.
But how could a social network created for college students get a reputation for being generally uncool and the domain of baby boomers?
It doesn’t appear to be a demographic issue. The biggest cohort is men between the ages of 25 and 34, which would fall within the millennial category. True boomers make up around 10% of its user base, but as any regular Facebook user can tell you, boomers tend to make themselves known on the platform.
Facebook may have 3 billion monthly active users, but it struggles to remain relevant with the youth. Still, Generation Z is expected to outnumber baby boomers on Facebook soon, if they haven’t already.
Ten years ago, 71% of teens between the ages of 13 and 17 said they used Facebook regularly, according to Pew Research. That number dwindled to 33% in 2023.
Teens, for their part, are flocking to short-form video apps like YouTube, TikTok, Snapchat and Instagram.
For Meta’s part, the company owns Instagram, the third-most-used social network in the world. A substantial portion of its users are in the 18 to 34 demographic, even if many of the videos are just reposts from TikTok.
Teens leaving Facebook might not be a bad thing for the company, which faces heated criticism over the platform’s harmful effects on teens. In recent years, the company has stopped trying to attract teens and is focusing on bringing young adults to the social network.
Meta, facing heat from all sides, announces safety measures for minors
A bipartisan group of state attorneys general filed a joint lawsuit against Meta, alleging that Facebook’s parent company knowingly used addictive features in its apps, negatively impacting children’s health. Lawmakers, often in disagreement, united to demand answers from Meta’s leadership about its impact on minors.
“Look, we have a tremendous amount of evidence and information that’s been developed that shows that Meta knowingly has designed its products in a way to maximize its ad revenue by addicting young teenagers onto its products,” District of Columbia Attorney General Brian Schwalb said.
“They hid from this committee and from all of Congress evidence of the harms that they knew was credible,” Connecticut Sen. Richard Blumenthal said.
“They are deliberately misleading parents about what’s on their platform,” Missouri Sen. Josh Hawley said. “They are deliberately misleading parents about the safety of their children online.”
Now, Meta has announced plans to expand safety measures for children and teens on its social media platforms, with the goal of making it harder for young users to come across sensitive content.
The company will implement restrictive settings on the accounts of teens and children, preventing users from searching “sensitive topics” and prompting teens to update their privacy settings.
In a blog post, Meta said Facebook and Instagram will hide search results for content related to suicide, self-harm, eating disorders, and nudity. Teens can still make posts on these subjects but won’t see them in their feed or stories, even if shared by someone they follow.
Meta aims to automatically place all teens under the most restrictive content control setting. These changes follow a whistleblower’s testimony to a Senate panel in November, stating that Meta knew harmful content was present on its platforms and company executives were taking no action.
“As a parent, I took the work personally,” Arturo Bejar, a former Meta employee, said. “By the time I left in 2015, I thought the work was going in the right direction. A few years later, my 14-year-old daughter joined Instagram. She and her friends began having awful experiences, including repeated unwanted sexual advances and harassment. She reported the incidents to the company, and it did nothing.”
Meta states the new update should be complete in a couple of weeks, just in time for CEO Mark Zuckerberg’s child safety testimony on Capitol Hill.
Study finding TikTok content suppressed due to Chinese interests raises questions
According to a recent study from the Network Contagion Research Institute and Rutgers, TikTok may be suppressing content that doesn’t align with Chinese interests. The study compared the volume of posts on controversial topics on Instagram with the volume on TikTok, concluding that the ratios show certain topics are nearly off-limits on TikTok.
However, the Cato Institute is highlighting what it calls “limitations” to the study, and is offering alternative explanations for the lack of sensitive topics on the platform.
The study focuses on ratios, comparing the number of posts in several categories on Instagram with the number on TikTok. For pop culture hashtags, Instagram had about twice as many posts as TikTok, which the study attributes to Instagram simply having more users and therefore more posts.
When looking at posts focusing on U.S. politics, the ratio remained the same. For every two Instagram posts related to U.S. politics, there would be one on TikTok.
Many topics stayed in the two-to-one range, but when investigating hashtags of more controversial categories, the ratio widened.
For example, for hashtags used in support of Ukraine, there were eight times as many posts on Instagram as on TikTok. Support for Ukraine is not a Chinese interest, as China backs Russia.
Other hashtags running counter to Chinese interests saw an even greater difference: #FreeUyghurs was 58 to one, and #AmazingTaiwan was used over 1 million times on Instagram compared to 1,000 on TikTok. For #HongKongProtest, the ratio landed at 206 to one.
On the other hand, hashtags friendly to Chinese interests saw the reverse. TikTok saw more than 600 posts for every one on Instagram for #StandWithKashmir.
“It is challenging to imagine that activity of such magnitude could occur on a platform organically, and without the knowledge and consent of the platform itself,” the study concluded. “We assess a strong possibility that content on TikTok is either amplified or suppressed based on its alignment with the interests of the Chinese government.”
TikTok called the study flawed based on its methodology — the same conclusion the Cato Institute reached.
Cato said TikTok and Instagram users have different interests, given the age differences between the platforms’ user bases.
About 25% of TikTok users are aged 10-19. Roughly 8% of Instagram users are teenagers.
When looking at the apps’ timelines, Instagram dates back to 2010, whereas TikTok launched in 2017.
In the seven years between, public discourse over certain issues spiked at different times. There were 37 times as many posts on Instagram related to the controversy between China and Tibet, for example, but much of the activism over that issue occurred before TikTok was available.
Whether it is done purposefully by the app or reflects users’ personal preferences, TikTok does have significantly less content tagged with hashtags that would undermine the Chinese government.
But now, one of the tools used by algorithm critics to track hashtags’ popularity has been restricted by TikTok. A spokesperson said it has been “misused to draw inaccurate conclusions.”
Investigation into new Meta smart glasses brings privacy concerns
The “next-generation Ray-Ban Meta smart glasses” secretly captured hundreds of photos of individuals in parks, on trains and on hiking trails without their knowledge. These covertly taken photos are sparking significant privacy concerns.
In the experiment, columnist Chen explored how the new glasses raise privacy concerns, especially now that they are integrated with livestreaming and artificial intelligence technology.
“Starting in the U.S., you’re going to get this state-of-the-art AI that you can interact with hands-free wherever you go,” Meta CEO Mark Zuckerberg said when he unveiled the new glasses.
Zuckerberg also posted a video to Instagram showing how the smart glasses can help translate signs and pick out a pair of pants to match an outfit.
The glasses feature a small LED light that shines from the right frame to indicate that the glasses are recording. When the glasses take a photo, a flash goes off as well. There is also “tamper-detection technology” to prevent a user from covering the LED light with tape.
“As I shot 200 photos and videos with the glasses in public, no one looked at the LED light or confronted me about it,” Chen wrote. “And why would they? It would be rude to comment on a stranger’s glasses, let alone stare at them.”
Meta’s collaboration with Ray-Ban is just one example of tech giants launching products that change what consumers do with their devices, pushing technology toward a more personal and interactive experience, often with the help of AI.
According to Chen, someone could unknowingly be part of that experience too, if they fail to see the LED light shining from the rim of a stranger’s glasses.
“Sleek, lightweight and satisfyingly hip, the Meta glasses blend effortlessly into the quotidian,” Chen wrote. “No one — not even my editor, who was aware I was writing this column — could tell them apart from ordinary glasses, and everyone was blissfully unaware of being photographed.”
House Republicans formally authorize Biden impeachment inquiry: The Morning Rundown, Dec. 14, 2023
House Republicans have formalized their impeachment inquiry into President Biden. And, Meta’s latest attempt at fact checking social media posts is facing criticism. These stories and more highlight The Morning Rundown for Thursday, Dec. 14.
House Republicans vote to authorize impeachment inquiry into President Biden
House Republicans called for the Wednesday, Dec. 13, vote after the White House challenged subpoenas sent by the committees, calling them illegitimate without a formal House vote authorizing the investigation. Republican leadership, including Speaker Mike Johnson, R-La., said voting in favor of the resolution does not mean the president will be impeached.
“We’re not going to prejudge the outcome of this,” Johnson said. “We can’t because, again, it’s not a political calculation. We’re following the law and we are the rule of law team. And I’m going to hold to that as my commitment.”
Hours before the vote, the president’s son, Hunter Biden, defied a subpoena for a closed-door deposition. Instead, he spoke to reporters outside the U.S. Capitol, where he repeated his offer to testify in a public hearing while criticizing the House GOP and defending his father.
“Let me state as clearly as I can,” Hunter Biden said, “my father was not financially involved in any of my business, not as a practicing lawyer, not as a board member of Burisma, not in my partnership with a Chinese private businessman, not in my investments at home nor abroad, and certainly not as an artist.”
House Oversight Committee Chairman James Comer, R-Ky., said he plans to hold Hunter Biden in contempt of Congress. Straight Arrow News’ political correspondent Ray Bogan has a full report on Hunter Biden’s comments.
Senate passes $886 billion defense bill with pay raise for troops
The bill also establishes a new training program in Taiwan, and it includes the creation of a special inspector general for Ukraine to address concerns on whether American dollars are being spent as intended.
The bill does keep in place the Pentagon’s rule that allows for travel reimbursement for service members going out of state for an abortion, a policy that House Republicans were looking to remove. The bill passed the Senate by a vote of 87-13. Now, the bill heads to the House.
Jake Sullivan heads to Israel to meet with Netanyahu
According to National Security Council spokesman John Kirby, over the next two days, Sullivan will discuss with Israeli officials “efforts to be more surgical and more precise” in their strikes in Gaza “to reduce harm to civilians.”
Sullivan is also expected to discuss the return of hostages taken by Hamas. Eight Americans are among the more than 100 hostages still being held by the terrorist group in Gaza. On Wednesday, Dec. 13, President Biden met with the families of the American hostages. Kirby said the president was “moved by their stories, by the love they feel, by the hope that they still harbor.”
Biden promised the family members they would be kept informed every step of the way as the U.S. works to get their loved ones home.
Judge orders a pause in Trump’s election interference case
Special Counsel Jack Smith argued against an all-out pause, hoping to keep the case on track for a March 4 trial. The order handed down by U.S. District Judge Tanya Chutkan could delay the start of the trial, which is now at the mercy of the appeals process timeline.
After the ruling, the appeals court set deadlines for briefs to be filed in the next couple of weeks in an effort to expedite the process. Still, prosecutors worry the case could be paused for some time, including until after the next presidential election.
Trump’s legal team called the decision “a big win for President Trump and our rule of law” and accused Special Counsel Jack Smith of using the trial to interfere in the 2024 presidential election in support of President Joe Biden.
Trump faces federal charges for allegedly trying to overturn the 2020 election. He has pleaded not guilty.
Meta fact-checking feature sparking censorship concerns
A new fact-checking feature that Meta quietly launched on Instagram has raised concerns over censorship. A Meta spokesperson said the company is giving users control over the algorithm, adding that the feature lets users customize whether they see more or less fact-checked content in their feed.
The spokesperson also said that the feature has been rolled out after listening to what users want, “a greater ability to decide what they see on our apps.”
Still, the feature is facing criticism. In recent days, some pro-Palestinian accounts have criticized the function, saying Instagram is trying to censor their content. The fact-checking feature launched on Facebook in May, but it is unclear when it arrived on Instagram, though a Meta blog post about the feature was updated on Tuesday, Dec. 12.
FCC approves proposal to ban cable early termination fees
For cable subscribers thinking of cutting the cord, the Federal Communications Commission just approved a proposal to eliminate early termination fees. The FCC on Wednesday, Dec. 13, voted 3-2 along party lines to ban cable and satellite companies from charging what the agency’s chairwoman described as “junk fees.”
According to 2019 findings by Consumer Reports, cable companies could make an estimated $28 billion a year on fees. The FCC will now solicit public feedback on its proposal before taking a final vote on the rules.
Musk takes on Disney for Facebook ads as New Mexico says Meta exploits kids
Elon Musk wants Disney CEO Bob Iger fired immediately. Musk’s call came after posts on X revealed that Disney is still advertising on Facebook and Instagram, even after New Mexico filed a lawsuit against Meta for alleged child sexual exploitation.
Disney cut advertising ties with Musk’s platform after he was accused of boosting an antisemitic conspiracy theory.
In a series of X posts, Musk said Iger should be let go.
“Bob Eiger thinks it’s cool to advertise next to child exploitation material. Real stand up guy,” Musk wrote.
“Don’t advertise,” Musk said on stage. “If someone is trying to blackmail me with advertising, blackmail me with money, go f— yourself. Go f— yourself. Is that clear? I hope it is. Hi Bob, if you’re in the audience.”
Musk was referring to Iger, who had spoken earlier at the summit.
On Tuesday, Dec. 5, New Mexico Attorney General Raúl Torrez filed a lawsuit against Meta for knowingly exposing children to the dangers of sexual exploitation and mental health harm.
The civil suit, filed against Meta and CEO Mark Zuckerberg, claims Instagram and Facebook are a “breeding ground for predators who target children for human trafficking” and allow adults to groom minors into participating in pornographic content.
Torrez’s agency conducted a thorough undercover investigation of Meta, spanning several months. The investigation involved the creation of decoy accounts on the platforms, portraying children aged 14 years and younger.
The investigation found these accounts were directed to a “stream of egregious, sexually explicit images – even when the child expressed no interest in the content.” It also found that Meta recommended these fake accounts to “join unmoderated Facebook groups devoted to facilitating commercial sex.”
Investigators also set up a decoy account that portrayed a mother who wanted to solicit her 13-year-old daughter for sex and “create a professional page to allow her daughter to share revenue from advertising.”
“Mr. Zuckerberg and other Meta executives are aware of the serious harm their products can pose to young users, and yet they have failed to make sufficient changes to their platforms that would prevent the sexual exploitation of children,” Torrez said in a news release.
The lawsuit claims Meta failed to protect children from sexual abuse, online solicitation and human trafficking because the company “profits from its exposure of young users to harmful material” by “monetizing in the form of targeted advertising, the data that Meta gathers about its young users and their usage.”
The lawsuit does not seek to hold Meta liable as the publisher of this material, but it does state the platform is responsible for “unlawful conduct in designing and maintaining its products in such a manner so as to cause known harms to its users.”
Torrez says the state will be investigating other companies. Zuckerberg is set to testify in front of the Senate Judiciary Committee on child safety in January.