Ice Lounge Media

SoftBank is in talks to invest up to $25 billion in OpenAI as part of a broader partnership that could see the Japanese conglomerate spend more than $40 billion on AI initiatives with the Microsoft-backed startup, according to the Financial Times. The potential investment would make SoftBank OpenAI’s largest single backer, the report said, surpassing […]

Read more

Meta says its controversial decision to put an end to its fact-checking program hasn’t impacted advertiser spend. On its Q4 2024 call, Meta CFO Susan Li assured investors that advertiser demand remains strong and the company’s commitment to brand safety remains unchanged, despite the new measures. Meanwhile, CEO Mark Zuckerberg noted that the community notes […]

Read more

LinkedIn, the social platform where people look for and talk about work, may be less visible in Microsoft’s earnings compared to the years when it was an independent company. But around earnings time, LinkedIn often reveals some figures that point to how it continues to grow.  On Wednesday, as Microsoft reported its Q2 numbers, the […]

Read more

Tesla CEO Elon Musk said Wednesday his company will launch a paid ride-hailing robotaxi service in Austin, Texas using its own fleet vehicles this coming June — the latest in a long line of sky-high promises he has yet to meet about autonomy. Musk was otherwise unsurprisingly light on details. During an earnings call, Musk […]

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Mice with two dads have been created using CRISPR

What’s new: Mice with two fathers have been born—and have survived to adulthood—following a complex set of experiments by a team in China. The researchers used CRISPR to create the mice, taking a novel approach to target genes that normally need to be inherited from both male and female parents. They hope to use the same approach to create primates with two dads.

Why it matters: Humans are off limits for now, but the work does help us better understand a strange biological phenomenon known as imprinting, which causes certain genes to be expressed differently depending on which parent they came from. Read the full story.

—Jessica Hamzelou

Three reasons Meta will struggle with community fact-checking

—Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.

Earlier this month, Mark Zuckerberg announced that Meta will cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more “democratic” approach that X (formerly Twitter) calls Community Notes.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, and ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

I’m a community moderator who researches community moderation. Here’s what I’ve learned about the limitations of relying on volunteers for moderation—and what Meta needs to do to succeed.

MIT Technology Review Narrated: Is this the end of animal testing?

Animal studies are notoriously bad at identifying human treatments. Around 95% of the drugs developed through animal research fail in people. But until recently there was no other option.

Now organs on chips may offer a truly viable alternative. They look remarkably prosaic: flexible polymer rectangles about the size of a thumb drive. In reality they’re triumphs of bioengineering, intricate constructions furrowed with tiny channels that are lined with living human tissues. And as they continue to be refined, they could solve one of the biggest problems in medicine today.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 DeepSeek has AI investors spooked
They’re worried they’ve wasted their money after the Chinese startup proved that powerful models can be created on a shoestring. (NYT $)
+ Its success has also shed light on how little we know about AI’s power demands. (FT $)
+ DeepSeek’s rapid rise is great news for China’s AI strategy. (WP $)
+ How a top Chinese AI model overcame US sanctions. (MIT Technology Review)

2 OpenAI has accused DeepSeek of using its AI models to train R1 
Just hours after Sam Altman claimed it was invigorating to have a new competitor. (FT $)
+ DeepSeek has been telling some people that it’s made by Microsoft. (Fast Company $)
+ Italy is investigating how the firm handles personal data in relation to GDPR. (TechCrunch)

3 Alibaba claims its new AI model surpasses DeepSeek’s
That was fast. (WSJ $)
+ Here’s what sets DeepSeek apart from its competition. (NBC News)

4 RFK Jr’s niece is trying to stop him from being appointed the top US health official
She’s shared private emails in which he makes false covid and vaccine claims. (STAT)
+ His cousin has also denounced him as a predator. (NY Mag $)
+ A weaker vaccine policy will lead to the resurgence of dangerous diseases. (The Atlantic $)
+ Why childhood vaccines are a public health success story. (MIT Technology Review)

5 Donald Trump has threatened new chip sanctions
In a heavy-handed attempt to force manufacturers to relocate to the US. (WP $)

6 Women seeking fertility treatment in the US are being left in the dark
Clinics don’t publicly declare how many times egg retrieval has gone wrong. (Bloomberg $)
+ Inside the strange limbo facing millions of IVF embryos. (MIT Technology Review)

7 Spotify claims that streaming has made the world value music
I’m not convinced artists will agree. (The Verge)

8 Supersonic commercial flights could be staging a comeback
More than two decades after Concorde ceased operation. (New Scientist $)
+ How rerouting planes to produce fewer contrails could help cool the planet. (MIT Technology Review)

9 LinkedIn has booted AI-generated jobseekers off its platform
Their accounts were created by a company peddling AI agents. (404 Media)
+ How one developer fought back against AI crawler bots. (Ars Technica)

10 The future of food is bacteria and algae
Mmm, delicious. (Undark)
+ Would you eat dried microbes? This company hopes so. (MIT Technology Review)

Quote of the day

“I don’t have technology. I’ve never emailed or, what do you call it, Twittered.” 

—Actor Christopher Walken isn’t a fan of modern gadgetry, he tells the Wall Street Journal.

The big story

Deepfakes of your dead loved ones are a booming Chinese business

May 2024

Once a week, Sun Kai has a video call with his mother, and they discuss his day-to-day life. But Sun’s mother died five years ago, and the person he’s talking to isn’t actually a person, but a digital replica he made of her. 

There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them.

But some question whether interacting with AI replicas of the dead is truly a healthy way to process grief, and it’s not entirely clear what the legal and ethical implications of this technology may be. Read the full story.

—Zeyi Yang

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Happy Chinese New Year to all those who celebrate! 🐍
+ These robots do a passable job dancing to mark the celebration.
+ If you haven’t seen A Real Pain in the theater yet, why not?
+ Cool—archaeologists have uncovered an ancient Roman mask that may depict Medusa.

Read more

Earlier this month, Mark Zuckerberg announced that Meta will cut back on its content moderation efforts and eliminate fact-checking in the US in favor of the more “democratic” approach that X (formerly Twitter) calls Community Notes, rolling back protections that he claimed had been developed only in response to media and government pressure.

The move is raising alarm bells, and rightly so. Meta has left a trail of moderation controversies in its wake, from overmoderating images of breastfeeding women to undermoderating hate speech in Myanmar, contributing to the genocide of Rohingya Muslims. Meanwhile, ending professional fact-checking creates the potential for misinformation and hate to spread unchecked.

Enlisting volunteers is how moderation started on the internet, long before social media giants realized that centralized efforts were necessary. And volunteer moderation can be successful, allowing for the development of bespoke regulations aligned with the needs of particular communities. But without significant commitment and oversight from Meta, such a system cannot contend with how much content is shared across the company’s platforms, and how fast. In fact, the jury is still out on how well it works at X, which is used by 21% of Americans (Meta’s platforms are significantly more popular—Facebook alone is used by 70% of Americans, according to Pew).

Community Notes, which started in 2021 as Birdwatch, is a community-driven moderation system on X that allows users who sign up for the program to add context to posts. Having regular users provide public fact-checking is relatively new, and so far results are mixed. For example, researchers have found that participants are more likely to challenge content they disagree with politically and that flagging content as false does not reduce engagement, but they have also found that the notes are typically accurate and can help reduce the spread of misleading posts.

I’m a community moderator who researches community moderation. Here’s what I’ve learned about the limitations of relying on volunteers for moderation—and what Meta needs to do to succeed: 

1. The system will miss falsehoods and could amplify hateful content

There is a real risk under this style of moderation that only posts about things that a lot of people know about will get flagged in a timely manner—or at all. Consider how a post with a picture of a death cap mushroom and the caption “Tasty” might be handled under Community Notes–style moderation. If an expert in mycology doesn’t see the post, or sees it only after it’s been widely shared, it may not get flagged as “Poisonous, do not eat”—at least not until it’s too late. Topic areas that are more esoteric will be undermoderated. This could have serious impacts on both individuals (who may eat a poisonous mushroom) and society (if a falsehood spreads widely). 

Crucially, X’s Community Notes aren’t visible to readers when they are first added. A note becomes visible to the wider user base only when enough contributors agree that it is accurate by voting for it. And not all votes count. If a note is rated only by people who tend to agree with each other, it won’t show up. X does not make a note visible until there’s agreement from people who have disagreed on previous ratings. This is an attempt to reduce bias, but it’s not foolproof. It still relies on people’s opinions about a note and not on actual facts. Often what’s needed is expertise.
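
To make that gating rule concrete, here is a minimal sketch in Python. It is not X’s production algorithm (the real, open-source system scores notes with a matrix-factorization model over the full rating history); it is only a toy version of the core principle: a note surfaces when raters who have historically disagreed with each other both mark it helpful. The rater names and the min_disagreement threshold are hypothetical.

from itertools import combinations

def disagreement(history, a, b):
    """Fraction of co-rated past notes on which raters a and b disagreed."""
    shared = set(history.get(a, {})) & set(history.get(b, {}))
    if not shared:
        return 0.0
    return sum(history[a][n] != history[b][n] for n in shared) / len(shared)

def note_visible(helpful_raters, history, min_disagreement=0.5):
    """Surface a note only if some pair of its 'helpful' raters has a
    track record of disagreeing, i.e. the agreement bridges viewpoints."""
    return any(disagreement(history, a, b) >= min_disagreement
               for a, b in combinations(helpful_raters, 2))

# Hypothetical rating history: True means the rater marked that past note helpful.
history = {
    "alice": {"n1": True, "n2": False, "n3": True},
    "bob":   {"n1": False, "n2": True, "n3": False},  # usually opposes alice
    "carol": {"n1": True, "n2": False, "n3": True},   # usually agrees with alice
}

print(note_visible(["alice", "carol"], history))  # False: like-minded raters only
print(note_visible(["alice", "bob"], history))    # True: agreement across a divide

Even in this toy form, the weakness described above is visible: the rule measures who agrees, not whether the note is factually correct, so a pair of habitual opponents can still wave an inaccurate note through.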

I moderate a community on Reddit called r/AskHistorians. It’s a public history site with over 2 million members and is very strictly moderated. We see people get facts wrong all the time. Sometimes these are straightforward errors. But sometimes there is hateful content that only experts can recognize. One time a question containing a Holocaust-denial dog whistle escaped review for hours and ended up amassing hundreds of upvotes before it was caught by an expert on our team. Hundreds of people—probably with very different voting patterns and very different opinions on a lot of topics—not only missed the problematic nature of the content but chose to promote it through upvotes. This happens with answers to questions, too. People who aren’t experts in history will upvote outdated, truthy-sounding answers that aren’t actually correct. Conversely, they will downvote good answers if they reflect viewpoints that are tough to swallow.

r/AskHistorians works because most of its moderators are expert historians. If Meta wants its Community Notes–style program to work, it should  make sure that the people with the knowledge to make assessments see the posts and that expertise is accounted for in voting, especially when there’s a misalignment between common understanding and expert knowledge. 

2. It won’t work without well-supported volunteers  

Meta’s paid content moderators review the worst of the worst—including gore, sexual abuse and exploitation, and violence. As a result, many have suffered severe trauma, leading to lawsuits and unionization efforts. When Meta cuts resources from its centralized moderation efforts, it will be increasingly up to unpaid volunteers to keep the platform safe. 

Community moderators don’t have an easy job. On top of exposure to horrific content, as identifiable members of their communities, they are also often subject to harassment and abuse—something we experience daily on r/AskHistorians. However, community moderators moderate only what they can handle. For example, while I routinely manage hate speech and violent language, as a moderator of a text-based community I am rarely exposed to violent imagery. Community moderators also work as a team. If I do get exposed to something I find upsetting or if someone is being abusive, my colleagues take over and provide emotional support. I also care deeply about the community I moderate. Care for community, supportive colleagues, and self-selection all help keep volunteer moderators’ morale high(ish). 

It’s unclear how Meta’s new moderation system will be structured. If volunteers choose what content they flag, will that replicate X’s problem, where partisanship affects which posts are flagged and how? It’s also unclear what kind of support the platform will provide. If volunteers are exposed to content they find upsetting, will Meta—the company that is currently being sued for damaging the mental health of its paid content moderators—provide social and psychological aid? To be successful, the company will need to ensure that volunteers have access to such resources and are able to choose the type of content they moderate (while also ensuring that this self-selection doesn’t unduly influence the notes).    

3. It can’t work without protections and guardrails 

Online communities can thrive when they are run by people who deeply care about them. However, volunteers can’t do it all on their own. Moderation isn’t just about making decisions on what’s “true” or “false.” It’s also about identifying and responding to other kinds of harmful content. Zuckerberg’s decision is coupled with other changes to Meta’s community standards that weaken rules around hateful content in particular. Community moderation is part of a broader ecosystem, and it becomes significantly harder to do it when that ecosystem gets poisoned by toxic content.

I started moderating r/AskHistorians in 2020 as part of a research project to learn more about the behind-the-scenes experiences of volunteer moderators. While Reddit had started addressing some of the most extreme hate on its platform by occasionally banning entire communities, many communities promoting misogyny, racism, and all other forms of bigotry were permitted to thrive and grow. As a result, my early field notes are filled with examples of extreme hate speech, as well as harassment and abuse directed at moderators. It was hard to keep up with. 

But halfway through 2020, something happened. After a milquetoast statement about racism from CEO Steve Huffman, moderators on the site shut down their communities in protest. And to its credit, the platform listened. Reddit updated its community standards to explicitly prohibit hate speech and began to enforce the policy more actively. While hate is still an issue on Reddit, I see far less now than I did in 2020 and 2021. Community moderation needs robust support because volunteers can’t do it all on their own. It’s only one tool in the box. 

If Meta wants to ensure that its users are safe from scams, exploitation, and manipulation in addition to hate, it cannot rely solely on community fact-checking. But keeping the user base safe isn’t what this decision aims to do. It’s a political move to curry favor with the new administration. Meta could create the perfect community fact-checking program, but because this decision is coupled with weakening its wider moderation practices, things are going to get worse for its users rather than better. 

Sarah Gilbert is research director for the Citizens and Technology Lab at Cornell University.

Read more