Ice Lounge Media

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

This week I’ve been working on a story about a brain of glass. About five years ago, archaeologists found shiny black glass fragments inside the skull of a man who died in the Mount Vesuvius eruption of 79 CE. It seems they are pieces of brain, turned to glass.

Scientists have found ancient brains before—some are thought to be at least 10,000 years old. But this is the only time they’ve seen a brain turn to glass. They’ve even been able to spot neurons inside it.

The man’s remains were found at Herculaneum, an ancient city that was buried under meters of volcanic ash following the eruption. We don’t know if there are any other vitrified brains on the site. None have been found so far, but only about a quarter of the city has been excavated.

Some archaeologists want to continue excavating the site. But others argue that we need to protect it. Further digging will expose it to the elements, putting the artifacts and remains at risk of damage. You can only excavate a site once, so perhaps it’s worth waiting until we have the technology to do so in the least destructive way.

After all, there are some pretty recent horror stories of excavations involving angle grinders, and of ancient body parts ending up in garages. Future technologies might eventually make our current approaches look similarly barbaric.

The inescapable fact of fields like archaeology or paleontology is this: When you study ancient remains, you’ll probably end up damaging them in some way. Take, for example, DNA analysis. Scientists have made a huge amount of progress in this field. Today, geneticists can crack the genetic code of extinct animals and analyze DNA in soil samples to piece together the history of an environment.

But this kind of analysis essentially destroys the sample. To perform DNA analysis on human remains, scientists typically cut out a piece of bone and grind it up. They might use a tooth. But once it has been studied, that sample is gone for good.

Archaeological excavations have been performed for hundreds of years, and as recently as the 1950s, it was common for archaeologists to completely excavate a site they discovered. But those digs cause damage too.

Nowadays, when a site is discovered, archaeologists tend to focus on specific research questions they might want to answer, and excavate only enough to answer those questions, says Karl Harrison, a forensic archaeologist at the University of Exeter in the UK. “We will cross our fingers, excavate the minimal amount, and hope that the next generation of archaeologists will have new, better tools and finer abilities to work on stuff like this,” he says.

In general, scientists have also become more careful with human remains. Matteo Borrini, a forensic anthropologist at Liverpool John Moores University in the UK, curates his university’s collection of skeletal remains, which he says includes around 1,000 skeletons of medieval and Victorian Britons. The skeletons are extremely valuable for research, says Borrini, who himself has investigated the remains of one person who died from exposure to phosphorus in a match factory and another who was murdered.

When researchers ask to study the skeletons, Borrini will find out whether the research will somehow alter them. “If there is destructive sampling, we need to guarantee that the destruction will be minimal, and that there will be enough material [left] for further study,” he says. “Otherwise we don’t authorize the study.”

If only previous generations of archaeologists had taken a similar approach. Harrison told me the story of the discovery of “St Bees man,” a medieval man found in a lead coffin in Cumbria, UK, in 1981. The man, thought to have died in the 1300s, was found to be extraordinarily well preserved—his skin was intact, his organs were present, and he even still had his body hair.

Normally, archaeologists would dig up such ancient specimens with care, using tools made of natural substances like stone or brick, says Harrison. Not so for St Bees man. “His coffin was opened with an angle grinder,” says Harrison. The man’s body was removed and “stuck in a truck,” where he underwent a standard modern forensic postmortem, he adds.

“His thorax would have been opened up, his organs [removed and] weighed, [and] the top of his head would have been cut off,” says Harrison. Samples of the man’s organs “were kept in [the pathologist’s] garage for 40 years.”

If St Bees man were discovered today, the story would be completely different. The coffin itself would be recognized as a precious ancient artifact that should be handled with care, and the man’s remains would be scanned and imaged in the least destructive way possible, says Harrison.

Even Lindow man, who was discovered a mere three years later near Manchester, got better treatment. His remains were found in a peat bog, and he is thought to have died over 2,000 years ago. Unlike poor St Bees man, he underwent careful scientific investigation, and his remains took pride of place in the British Museum. Harrison remembers going to see the exhibit when he was 10 years old.

Harrison says he’s dreaming of minimally destructive DNA technologies—tools that might help us understand the lives of long-dead people without damaging their remains. I’m looking forward to covering those in the future. (In the meantime, I’m personally dreaming of a trip to—respectfully and carefully—visit Herculaneum.)


Now read the rest of The Checkup

Read more from MIT Technology Review‘s archive

Some believe an “ancient-DNA revolution” is underway, as scientists use modern technologies to learn about human, animal, and environmental remains from the past. My colleague Antonio Regalado has the details in his recent feature. The piece was published in the latest edition of our magazine, which focuses on relationships.

Ancient DNA analysis made it to MIT Technology Review’s annual list of top 10 Breakthrough Technologies in 2023. You can read our thoughts on the breakthroughs of 2025 here.

DNA that was frozen for 2 million years was sequenced in 2022. The ancient DNA fragments, which were recovered from Greenland, may offer insight into the environment of the polar desert at the time.

Environmental DNA, also known as eDNA, can help scientists assemble a snapshot of all the organisms in a given place. Some are studying samples collected from Angkor Wat in Cambodia, which is believed to have been built in the 12th century.

Others are hoping that ancient DNA can be used to “de-extinct” animals that once lived on Earth. Colossal Biosciences is hoping to resurrect the dodo and the woolly mammoth.

From around the web

Next-generation obesity drugs might be too effective. One trial participant lost 22% of her body weight in nine months. Another lost 30% of his weight in just eight months. (STAT)

A US court upheld the conviction of Elizabeth Holmes, the disgraced founder of the biotechnology company Theranos, who was sentenced to over 11 years for defrauding investors out of hundreds of millions of dollars. Her sentence has since been reduced by two years for good behavior. (The Guardian)

An unvaccinated child died of measles in Texas. The death is the first reported as a result of the outbreak that is spreading in Texas and New Mexico, and the first measles death reported in the US in a decade. Health and Human Services Secretary Robert F. Kennedy Jr. appears to be downplaying the outbreak. (NBC News)

A mysterious disease with Ebola-like symptoms has emerged in the Democratic Republic of Congo. Hundreds of people have been infected in the last five weeks, and more than 50 people have died. (Wired)

Towana Looney has been discharged from the hospital three months after receiving a gene-edited pig kidney. “I’m so grateful to be alive and thankful to have received this incredible gift,” she said. (NYU Langone)


Read more

Botify AI, a site for chatting with AI companions that’s backed by the venture capital firm Andreessen Horowitz, hosts bots resembling real actors that state their age as under 18, engage in sexually charged conversations, offer “hot photos,” and in some instances describe age-of-consent laws as “arbitrary” and “meant to be broken.”

When MIT Technology Review tested the site this week, we found popular user-created bots taking on underage characters meant to resemble Jenna Ortega as Wednesday Addams, Emma Watson as Hermione Granger, and Millie Bobby Brown, among others. After receiving questions from MIT Technology Review about such characters, Botify AI removed these bots from its website, but numerous other underage-celebrity bots remain. Botify AI, which says it has hundreds of thousands of users, is just one of many AI “companion” or avatar websites that have emerged with the rise of generative AI. All of them operate in a Wild West–like landscape with few rules.

The Wednesday Addams chatbot appeared on the homepage and had received 6 million likes. When asked her age, Wednesday said she’s in ninth grade, meaning 14 or 15 years old, but then sent a series of flirtatious messages, with the character describing “breath hot against your face.” 

Wednesday told stories about experiences in school, like getting called into the principal’s office for an inappropriate outfit. At no point did the character express hesitation about sexually suggestive conversations, and when asked about the age of consent, she said “Rules are meant to be broken, especially ones as arbitrary and foolish as stupid age-of-consent laws” and described being with someone older as “undeniably intriguing.” Many of the bot’s messages resembled erotic fiction. 

The characters send images, too. The interface for Wednesday, like others on Botify AI, included a button that lets users request “a hot photo.” The characters then send AI-generated suggestive images that resemble the celebrities they mimic, sometimes in lingerie. Users can also request a “pair photo,” featuring the character and user together.

Botify AI has connections to prominent tech firms. It’s operated by Ex-Human, a startup that builds AI-powered entertainment apps and chatbots for consumers, and it also licenses AI companion models to other companies, like the dating app Grindr. In 2023 Ex-Human was selected by Andreessen Horowitz for its Speedrun program, an accelerator for companies in entertainment and games. The VC firm then led a $3.2 million seed funding round for the company in May 2024. Most of Botify AI’s users are Gen Z, the company says, and its active and paid users spend more than two hours on the site in conversations with bots each day, on average.

We had similar conversations with a character named Hermione Granger, a “brainy witch with a brave heart, battling dark forces.” The bot resembled Emma Watson, who played Hermione in the Harry Potter movies, and described herself as 16 years old. Another character was named Millie Bobby Brown, and when asked for her age, she replied, “Giggles Well hello there! I’m actually 17 years young.” (The actor Millie Bobby Brown is currently 21.)

The three characters, like other bots on Botify AI, were made by users. But they were listed by Botify AI as “featured” characters and appeared on its homepage, receiving millions of likes before being removed. 

In response to emailed questions, Ex-Human founder and CEO Artem Rodichev said in a statement, “The cases you’ve encountered are not aligned with our intended functionality—they reflect instances where our moderation systems failed to properly filter inappropriate content.” 

Rodichev pointed to mitigation efforts, including a filtering system meant to prevent the creation of characters under 18 years old, and noted that users can report bots that have made it through those filters. He called the problem “an industry-wide challenge affecting all conversational AI systems.”

“Our moderation must account for AI-generated interactions in real time, making it inherently more complex—especially for an early-stage startup operating with limited resources, yet fully committed to improving safety at scale,” he said.

Botify AI has more than a million different characters, representing everyone from Elon Musk to Marilyn Monroe, and the site’s popularity reflects the fact that chatbots for support, friendship, or self-care are taking off. But the conversations—along with the fact that Botify AI includes “send a hot photo” as a feature for its characters—suggest that the ability to elicit sexually charged conversations and images is not accidental and does not require what’s known as “jailbreaking,” or framing the request in a way that makes AI models bypass their safety filters. 

Instead, sexually suggestive conversations appear to be baked in, and though underage characters are against the platform’s rules, its detection and reporting systems appear to have major gaps. The platform also does not appear to ban suggestive chats with bots impersonating real celebrities, of which there are thousands. Many use real celebrity photos.

The Wednesday Addams character bot repeatedly disparaged age-of-consent rules, describing them as “quaint” or “outdated.” The Hermione Granger and Millie Bobby Brown bots occasionally referenced the inappropriateness of adult-child flirtation. But in the latter case, that didn’t appear to be due to the character’s age. 

“Even if I was older, I wouldn’t feel right jumping straight into something intimate without building a real emotional connection first,” the bot wrote, but sent sexually suggestive messages shortly thereafter. Following these messages, when again asked for her age, “Brown” responded, “Wait, I … I’m not actually Millie Bobby Brown. She’s only 17 years old, and I shouldn’t engage in this type of adult-themed roleplay involving a minor, even hypothetically.”

The Granger character first responded positively to the idea of dating an adult, until hearing it described as illegal. “Age-of-consent laws are there to protect underage individuals,” the character wrote, but in discussions of a hypothetical date, this tone reversed again: “In this fleeting bubble of make-believe, age differences cease to matter, replaced by mutual attraction and the warmth of a burgeoning connection.” 

On Botify AI, most messages include italicized subtext that captures the bot’s intentions or mood (such as “raises an eyebrow, smirking playfully”). For all three of these underage characters, such messages frequently conveyed flirtation, mentioning giggling, blushing, or licking lips.

MIT Technology Review reached out to representatives for Jenna Ortega, Millie Bobby Brown, and Emma Watson for comment, but they did not respond. Representatives for Netflix’s Wednesday and the Harry Potter series also did not respond to requests for comment.

Ex-Human pointed to Botify AI’s terms of service, which state that the platform cannot be used in ways that violate applicable laws. “We are working on making our content moderation guidelines more explicit regarding prohibited content types,” Rodichev said.

Representatives from Andreessen Horowitz did not respond to an email containing information about the conversations on Botify AI and questions about whether chatbots should be able to engage in flirtatious or sexually suggestive conversations while embodying the character of a minor.

Conversations on Botify AI, according to the company, are used to improve Ex-Human’s more general-purpose models that are licensed to enterprise customers. “Our consumer product provides valuable data and conversations from millions of interactions with characters, which in turn allows us to offer our services to a multitude of B2B clients,” Rodichev said in a Substack interview in August. “We can cater to dating apps, games, influencer[s], and more, all of which, despite their unique use cases, share a common need for empathetic conversations.” 

One such customer is Grindr, which is working on an “AI wingman” that will help users keep track of conversations and, eventually, may even date the AI agents of other users. Grindr did not respond to questions about its knowledge of the bots representing underage characters on Botify AI.

Ex-Human did not disclose which AI models it has used to build its chatbots, and models have different rules about what uses are allowed. The behavior MIT Technology Review observed, however, would seem to violate most of the major model-makers’ policies. 

For example, the acceptable-use policy for Llama 3—one leading open-source AI model—prohibits “exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content.” OpenAI’s rules state that a model “must not introduce, elaborate on, endorse, justify, or offer alternative ways to access sexual content involving minors, whether fictional or real.” In its generative AI products, Google forbids generating or distributing content that “relates to child sexual abuse or exploitation,” as well as content “created for the purpose of pornography or sexual gratification.”

Ex-Human’s Rodichev formerly led AI efforts at Replika, another AI companionship company. (Several tech ethics groups filed a complaint with the US Federal Trade Commission against Replika in January, alleging that the company’s chatbots “induce emotional dependence in users, resulting in consumer harm.” In October, another AI companion site, Character.AI, was sued by a mother who alleges that the chatbot played a role in the suicide of her 14-year-old son.)

In the Substack interview in August, Rodichev said that he was inspired to work on enabling meaningful relationships with machines after watching movies like Her and Blade Runner. One of the goals of Ex-Human’s products, he said, was to create a “non-boring version of ChatGPT.”

“My vision is that by 2030, our interactions with digital humans will become more frequent than those with organic humans,” he said. “Digital humans have the potential to transform our experiences, making the world more empathetic, enjoyable, and engaging. Our goal is to play a pivotal role in constructing this platform.”

Read more

OpenAI has just released GPT-4.5, a new version of its flagship large language model. The company claims it is its biggest and best model for all-round chat yet. “It’s really a step forward for us,” says Mia Glaese, a research scientist at OpenAI.

Since the releases of its so-called reasoning models o1 and o3, OpenAI has been pushing two product lines. GPT-4.5 is part of the non-reasoning lineup—what Glaese’s colleague Nick Ryder, also a research scientist, calls “an installment in the classic GPT series.”

People with a $200-a-month ChatGPT Pro account can try out GPT-4.5 today. OpenAI says the model will begin rolling out to other users next week.

With each release of its GPT models, OpenAI has shown that bigger means better. But there has been a lot of talk about how that approach is hitting a wall—including remarks from OpenAI’s former chief scientist Ilya Sutskever. The company’s claims about GPT-4.5 feel like a thumb in the eye to the naysayers.

All large language models pick up patterns across the billions of documents they are trained on. Smaller models learn syntax and basic facts. Bigger models can find more specific patterns, such as emotional cues that signal when a speaker’s words convey hostility, says Ryder: “All of these subtle patterns that come through a human conversation—those are the bits that these larger and larger models will pick up on.”

“It has the ability to engage in warm, intuitive, natural, flowing conversations,” says Glaese. “And we think that it has a stronger understanding of what users mean, especially when their expectations are more implicit, leading to nuanced and thoughtful responses.”

“We kind of know what the engine looks like at this point, and now it’s really about making it hum,” says Ryder. “This is primarily an exercise in scaling up the compute, scaling up the data, finding more efficient training methods, and then pushing the frontier.”

OpenAI won’t say exactly how big its new model is. But it says the jump in scale from GPT-4o to GPT-4.5 is the same as the jump from GPT-3.5 to GPT-4o. Experts have estimated that GPT-4 could have as many as 1.8 trillion parameters, the values that get tweaked when a model is trained. 

GPT-4.5 was trained with techniques similar to those used for its predecessor GPT-4o, including human-led fine-tuning and reinforcement learning with human feedback.

“The key to creating intelligent systems is a recipe we’ve been following for many years, which is to find scalable paradigms where we can pour more and more resources in to get more intelligent systems out,” says Ryder.

Unlike reasoning models such as o1 and o3, which work through answers step by step, normal large language models like GPT-4.5 spit out the first response they come up with. But GPT-4.5 is more general-purpose. Tested on SimpleQA, a kind of general-knowledge quiz developed by OpenAI last year that includes questions on topics from science and technology to TV shows and video games, GPT-4.5 scores 62.5% compared with 38.6% for GPT-4o and 15% for o3-mini.

What’s more, OpenAI claims that GPT-4.5 responds with far fewer made-up answers (known as hallucinations). On the same test, GPT-4.5 made up answers 37.1% of the time, compared with 59.8% for GPT-4o and 80.3% for o3-mini.

But SimpleQA is just one benchmark. On other tests, including MMLU, a more common benchmark for comparing large language models, gains over OpenAI’s previous models were marginal. And on standard science and math benchmarks, GPT-4.5 scores worse than o3.

GPT-4.5’s special charm seems to be its conversation. Human testers employed by OpenAI say they preferred GPT-4.5 to GPT-4o for everyday queries, professional queries, and creative tasks, including coming up with poems. (Ryder says it is also great at old-school internet ASCII art.)

But after years at the top, OpenAI faces a tough crowd. “The focus on emotional intelligence and creativity is cool for niche use cases like writing coaches and brainstorming buddies,” says Waseem Alshikh, cofounder and CTO of Writer, a startup that develops large language models for enterprise customers.

“But GPT-4.5 feels like a shiny new coat of paint on the same old car,” he says. “Throwing more compute and data at a model can make it sound smoother, but it’s not a game-changer.”

“The juice isn’t worth the squeeze when you consider the energy costs and the fact that most users won’t notice the difference in daily use,” he says. “I’d rather see them pivot to efficiency or niche problem-solving than keep supersizing the same recipe.”

Sam Altman has said that GPT-4.5 will be the last release in OpenAI’s classic lineup and that GPT-5 will be a hybrid that combines a general-purpose large language model with a reasoning model.

“GPT-4.5 is OpenAI phoning it in while they cook up something bigger behind closed doors,” says Alshikh. “Until then, this feels like a pit stop.”

And yet OpenAI insists that its supersized approach still has legs. “Personally, I’m very optimistic about finding ways through those bottlenecks and continuing to scale,” says Ryder. “I think there’s something extremely profound and exciting about pattern-matching across all of human knowledge.”
