Ice Lounge Media

When you walk around in a version of the video game Minecraft from the AI companies Decart and Etched, it feels a little off. Sure, you can move forward, cut down a tree, and lay down a dirt block, just like in the real thing. If you turn around, though, the dirt block you just placed may have morphed into a totally new environment. That doesn’t happen in Minecraft. But this new version is entirely AI-generated, so it’s prone to hallucinations. Not a single line of code was written.

For Decart and Etched, this demo is a proof of concept. They imagine that the technology could be used for real-time generation of videos or video games more generally. “Your screen can turn into a portal—into some imaginary world that doesn’t need to be coded, that can be changed on the fly. And that’s really what we’re trying to target here,” says Dean Leitersdorf, cofounder and CEO of Decart, which came out of stealth this week.

Their version of Minecraft is generated in real time, in a technique known as next-frame prediction. They did this by training their model, Oasis, on millions of hours of Minecraft gameplay and recordings of the corresponding actions a user would take in the game. The AI is able to sort out the physics, environments, and controls of Minecraft from this data alone. 
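
To make the idea of next-frame prediction concrete, here is a minimal, hypothetical training sketch in PyTorch: a model takes the current frame plus the player's action and is trained to predict the next frame. The architecture, data, and action set are placeholders for illustration, not Decart's Oasis model.

    # Minimal sketch of action-conditioned next-frame prediction on dummy data.
    # This illustrates the general technique only, not the Oasis architecture.
    import torch
    import torch.nn as nn

    class NextFramePredictor(nn.Module):
        def __init__(self, num_actions: int, frame_channels: int = 3):
            super().__init__()
            self.action_embed = nn.Embedding(num_actions, 16)
            self.encoder = nn.Conv2d(frame_channels, 32, kernel_size=3, padding=1)
            self.decoder = nn.Conv2d(32 + 16, frame_channels, kernel_size=3, padding=1)

        def forward(self, frame: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
            # Encode the current frame, then condition on the player's action.
            feats = torch.relu(self.encoder(frame))                       # (B, 32, H, W)
            act = self.action_embed(action)                               # (B, 16)
            act = act[:, :, None, None].expand(-1, -1, *feats.shape[2:])  # broadcast over pixels
            return self.decoder(torch.cat([feats, act], dim=1))           # predicted next frame

    model = NextFramePredictor(num_actions=10)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One training step on a dummy batch: learn to map (frame, action) to the next frame.
    frame = torch.rand(4, 3, 64, 64)
    action = torch.randint(0, 10, (4,))
    next_frame = torch.rand(4, 3, 64, 64)
    loss = nn.functional.mse_loss(model(frame, action), next_frame)
    opt.zero_grad()
    loss.backward()
    opt.step()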

The companies acknowledge that their version of Minecraft is a little wonky. The resolution is quite low, you can only play for minutes at a time, and it’s prone to hallucinations like the one described above. But they believe that with innovations in chip design and further improvements, there’s no reason they can’t develop a high-fidelity version of Minecraft, or really any game. 

“What if you could say ‘Hey, add a flying unicorn here’? Literally, talk to the model. Or ‘Turn everything here into medieval ages,’ and then, boom, it’s all medieval ages. Or ‘Turn this into Star Wars,’ and it’s all Star Wars,” says Leitersdorf.

A major limitation right now is hardware. They relied on Nvidia cards for their current demo, but in the future, they plan to use Sohu, a new card that Etched has in development, which the firm claims will improve performance by a factor of 10. This gain would significantly cut down on the cost and energy needed to produce real-time interactive video. It would allow Decart and Etched to make a better version of their current demo, allowing the game to run longer, with fewer hallucinations, and at higher resolution. They say the new chip would also make it possible for more players to use the model at once.

“Custom chips for AI hold the potential to unlock significant performance gains and energy efficiency gains,” says Siddharth Garg, a professor of electrical and computer engineering at NYU Tandon, who is not associated with Etched or Decart.

Etched says that its gains come from designing its cards specifically for AI development. For example, the chip uses a single core, which the company says makes it possible to handle complicated mathematical operations more efficiently. The chip also focuses on inference (where an AI makes predictions) over training (where an AI learns from data).
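
For readers less familiar with that distinction, here is a minimal PyTorch sketch of a training step versus an inference step. The model is a generic placeholder, not Etched's target workload.

    # Sketch of training vs. inference with a generic model (not Etched's workload).
    import torch
    import torch.nn as nn

    model = nn.Linear(128, 10)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    # Training: forward pass, loss, backward pass, weight update.
    x, y = torch.rand(32, 128), torch.randint(0, 10, (32,))
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # Inference: forward pass only, no gradients. This is the kind of workload
    # the article says chips like Sohu are designed to accelerate.
    with torch.no_grad():
        predictions = model(torch.rand(32, 128)).argmax(dim=1)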

“We are building something much more specialized than all of the chips out on the market today,” says Robert Wachen, cofounder and COO of Etched. The company plans to run projects on the new card next year. Until the chip is deployed and its capabilities are independently verified, Etched’s claims remain unsubstantiated. And given the extent of AI specialization already in the top GPUs on the market, Garg is “very skeptical about a 10x improvement just from smarter or more specialized design.”

But the two companies have big ambitions. If the efficiency gains are close to what Etched claims, they believe, they will be able to generate real-time virtual doctors or tutors. “All of that is coming down the pipe, and it comes from having a better architecture and better hardware to power it. So that’s what we’re really trying to get people to realize with the proof of concept here,” says Wachen.

For the time being, you can try out the demo of their version of Minecraft here.

Read more

In late October, News Corp filed a lawsuit against Perplexity AI, a popular AI search engine. At first glance, this might seem unremarkable. After all, the lawsuit joins more than two dozen similar cases seeking credit, consent, or compensation for the use of data by AI developers. Yet this particular dispute is different, and it might be the most consequential of them all.

At stake is the future of AI search—that is, chatbots that summarize information from across the web. If their growing popularity is any indication, these AI “answer engines” could replace traditional search engines as our default gateway to the internet. While ordinary AI chatbots can reproduce—often unreliably—information learned through training, AI search tools like Perplexity, Google’s Gemini, or OpenAI’s now-public SearchGPT aim to retrieve and repackage information from third-party websites. They return a short digest to users along with links to a handful of sources, ranging from research papers to Wikipedia articles and YouTube transcripts. The AI system does the reading and writing, but the information comes from outside.
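
As a rough illustration of that retrieve-and-repackage loop, here is a minimal Python sketch. The search and summarization functions are hypothetical placeholders, not any vendor's actual API.

    # Minimal sketch of an "answer engine": retrieve sources, summarize, cite.
    from dataclasses import dataclass

    @dataclass
    class Source:
        url: str
        text: str

    def search_web(query: str) -> list[Source]:
        # Placeholder: a real answer engine would query a search index or crawler here.
        return [Source("https://example.com/article", "Example article text...")]

    def summarize(query: str, sources: list[Source]) -> str:
        # Placeholder: a real system would prompt a language model with the sources.
        return f"Digest for '{query}' based on {len(sources)} source(s)."

    def answer(query: str) -> str:
        sources = search_web(query)
        digest = summarize(query, sources)
        citations = "\n".join(f"[{i+1}] {s.url}" for i, s in enumerate(sources))
        # The AI does the reading and writing, but the information comes from outside.
        return f"{digest}\n\nSources:\n{citations}"

    print(answer("What did News Corp allege against Perplexity AI?"))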

At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy. Today, the production of content online depends on a fragile set of incentives tied to virtual foot traffic: ads, subscriptions, donations, sales, or brand exposure. By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and “eyeballs” they need to survive. 

If AI search breaks up this ecosystem, existing law is unlikely to help. Governments already believe that content is falling through cracks in the legal system, and they are learning to regulate the flow of value across the web in other ways. The AI industry should use this narrow window of opportunity to build a smarter content marketplace before governments fall back on interventions that are ineffective, benefit only a select few, or hamper the free flow of ideas across the web.

Copyright isn’t the answer to AI search disruption

News Corp argues that using its content to extract information for AI search amounts to copyright infringement, claiming that Perplexity AI “compete[s] for readers while simultaneously freeriding” on publishers. That sentiment is likely shared by the New York Times, which sent a cease-and-desist letter to Perplexity AI in mid-October.

In some respects, the case against AI search is stronger than other cases that involve AI training. In training, content has the biggest impact when it is unexceptional and repetitive; an AI model learns generalizable behaviors by observing recurring patterns in vast data sets, and the contribution of any single piece of content is limited. In search, content has the most impact when it is novel or distinctive, or when the creator is uniquely authoritative. By design, AI search aims to reproduce specific features from that underlying data, invoke the credentials of the original creator, and stand in place of the original content. 

Even so, News Corp faces an uphill battle to prove that Perplexity AI infringes copyright when it processes and summarizes information. Copyright doesn’t protect mere facts, or the creative, journalistic, and academic labor needed to produce them. US courts have historically favored tech defendants who use content for sufficiently transformative purposes, and this pattern seems likely to continue. And if News Corp were to succeed, the implications would extend far beyond Perplexity AI. Restricting the use of information-rich content for noncreative or nonexpressive purposes could limit access to abundant, diverse, and high-quality data, hindering wider efforts to improve the safety and reliability of AI systems. 

Governments are learning to regulate the distribution of value online

If existing law is unable to resolve these challenges, governments may look to new laws. Emboldened by recent disputes with traditional search and social media platforms, governments could pursue aggressive reforms modeled on the media bargaining codes enacted in Australia and Canada or proposed in California and the US Congress. These reforms compel designated platforms to pay certain media organizations for displaying their content, such as in news snippets or knowledge panels. The EU imposed similar obligations through copyright reform, while the UK has introduced broad competition powers that could be used to enforce bargaining. 

In short, governments have shown they are willing to regulate the flow of value between content producers and content aggregators, abandoning their traditional reluctance to interfere with the internet.

However, mandatory bargaining is a blunt solution for a complex problem. These reforms favor a narrow class of news organizations, operating on the assumption that platforms like Google and Meta exploit publishers. In practice, it’s unclear how much of their platform traffic is truly attributable to news, with estimates ranging from 2% to 35% of search queries and just 3% of social media feeds. At the same time, platforms offer significant benefit to publishers by amplifying their content, and there is little consensus about the fair apportionment of this two-way value. Controversially, the four bargaining codes regulate simply indexing or linking to news content, not just reproducing it. This threatens the “ability to link freely” that underpins the web. Moreover, bargaining rules focused on legacy media—just 1,400 publications in Canada, 1,500 in the EU, and 62 organizations in Australia—ignore countless everyday creators and users who contribute the posts, blogs, images, videos, podcasts, and comments that drive platform traffic.

Yet for all its pitfalls, mandatory bargaining may become an attractive response to AI search. For one thing, the case is stronger. Unlike traditional search—which indexes, links, and displays brief snippets from sources to help a user decide whether to click through—AI search could directly substitute generated summaries for the underlying source material, potentially draining traffic, eyeballs, and exposure from downstream websites. More than a third of Google sessions end without a click, and the proportion is likely to be significantly higher in AI search. AI search also simplifies the economic calculus: Since only a few sources contribute to each response, platforms—and arbitrators—can more accurately track how much specific creators drive engagement and revenue.  

Ultimately, the devil is in the details. Well-meaning but poorly designed mandatory bargaining rules might do little to fix the problem, protect only a select few, and potentially cripple the free exchange of information across the web. 

Industry has a narrow window to build a fairer reward system

However, the mere threat of intervention could have a bigger impact than actual reform. AI firms quietly recognize the risk that litigation will escalate into regulation. For example, Perplexity AI, OpenAI, and Google are already striking deals with publishers and content platforms, some covering AI training and others focusing on AI search. But like early bargaining laws, these agreements benefit only a handful of firms, some of which (such as Reddit) haven’t yet committed to sharing that revenue with their own creators. 

This policy of selective appeasement is untenable. It neglects the vast majority of creators online, who cannot readily opt out of AI search and who do not have the bargaining power of a legacy publisher. It takes the urgency out of reform by mollifying the loudest critics. It legitimizes a few AI firms through confidential and intricate commercial deals, making it difficult for new entrants to obtain equal terms or equal indemnity and potentially entrenching a new wave of search monopolists. In the long term, it could create perverse incentives for AI firms to favor low-cost and low-quality sources over high-quality but more expensive news or content, fostering a culture of uncritical information consumption in the process.

Instead, the AI industry should invest in frameworks that reward creators of all kinds for sharing valuable content. From YouTube to TikTok to X, tech platforms have proven they can administer novel rewards for distributed creators in complex content marketplaces. Indeed, fairer monetization of everyday content is a core objective of the “web3” movement celebrated by venture capitalists. The same reasoning carries over to AI search. If queries yield lucrative engagement but users don’t click through to sources, commercial AI search platforms should find ways to attribute that value to creators and share it back at scale.

Of course, it’s possible that our digital economy was broken from the start. Subsistence on trickle-down ad revenue may be unsustainable, and the attention economy has inflicted real harm on privacy, integrity, and democracy online. Supporting quality news and fresh content may require other forms of investment or incentives. 

But we shouldn’t give up on the prospect of a fairer digital economy. If anything, while AI search makes content bargaining more urgent, it also makes it more feasible than ever before. AI pioneers should seize this opportunity to lay the foundations for a smart, equitable, and scalable reward system. If they don’t, governments now have the frameworks—and confidence—to impose their own vision of shared value.

Benjamin Brooks is a fellow at the Berkman Klein Center at Harvard scrutinizing the regulatory and legislative response to AI. He previously led public policy for Stability AI, a developer of open models for image, language, audio, and video generation. His views do not necessarily represent those of any affiliated organization, past or present. 

Read more

ChatGPT can now search the web for up-to-date answers to a user’s queries, OpenAI announced today. 

Until now, ChatGPT was mostly restricted to generating answers from its training data, which is current up to October 2023 for GPT-4o, and had limited web search capabilities. Searches about general topics will still draw on this information from the model itself, but ChatGPT will now automatically search the web in response to queries about recent information such as sports, stocks, or news of the day, and it can deliver rich multimedia results. Users can also manually trigger a web search, but for the most part, the chatbot will make its own decision about when an answer would benefit from information taken from the web, says Adam Fry, OpenAI’s product lead for search.
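
A minimal sketch of that routing decision might look like the following. The keyword heuristic is purely illustrative and is not OpenAI's actual logic, which lets the model itself decide when to search.

    # Sketch of deciding when to answer from the model vs. search the web.
    RECENCY_HINTS = ("today", "latest", "score", "stock price", "news", "weather")

    def needs_web_search(query: str, force_search: bool = False) -> bool:
        # A manual trigger always wins; otherwise fall back to a crude recency check.
        return force_search or any(hint in query.lower() for hint in RECENCY_HINTS)

    def respond(query: str, force_search: bool = False) -> str:
        if needs_web_search(query, force_search):
            return f"[web search] fetching current results for: {query}"
        return f"[model only] answering from training data: {query}"

    print(respond("Who won the game today?"))   # triggers a search
    print(respond("Explain photosynthesis."))   # answered from the model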

“Our goal is to make ChatGPT the smartest assistant, and now we’re really enhancing its capabilities in terms of what it has access to from the web,” Fry tells MIT Technology Review. The feature is available today for the chatbot’s paying users. 

ChatGPT triggers a web search when the user asks about local restaurants in this example

While ChatGPT search, as it is known, is initially available to paying customers, OpenAI intends to make it available for free later, even when people are logged out. The company also plans to combine search with its voice features and Canvas, its interactive platform for coding and writing, although these capabilities will not be available in today’s initial launch.

The company unveiled a standalone prototype of web search in July. Those capabilities are now built directly into the chatbot. OpenAI says it has “brought the best of the SearchGPT experience into ChatGPT.” 

OpenAI is the latest tech company to debut an AI-powered search assistant, challenging similar tools from competitors such as Google, Microsoft, and startup Perplexity. Meta, too, is reportedly developing its own AI search engine. As with Perplexity’s interface, users of ChatGPT search can interact with the chatbot in natural language, and it will offer an AI-generated answer with sources and links to further reading. In contrast, Google’s AI Overviews offer a short AI-generated summary at the top of the website, as well as a traditional list of indexed links. 

These new tools could eventually challenge Google’s 90% market share in online search. AI search is a very important way to draw more users, says Chirag Shah, a professor at the University of Washington, who specializes in online search. But he says it is unlikely to chip away at Google’s search dominance. Microsoft’s high-profile attempt with Bing barely made a dent in the market, Shah says. 

Instead, OpenAI is trying to create a new market for more powerful and interactive AI agents, which can take complex actions in the real world, Shah says. 

The new search function in ChatGPT is a step toward these agents. 

It can also deliver highly contextualized responses that take advantage of chat histories, allowing users to go deeper in a search. Currently, ChatGPT search is able to recall conversation histories and continue the conversation with questions on the same topic. 

ChatGPT itself can also remember things about users that it can use later—sometimes it does this automatically, or you can ask it to remember something. Those “long-term” memories affect how it responds to chats. Search doesn’t have this yet—a new web search starts from scratch—but it should gain the capability in the “next couple of quarters,” says Fry. When it does, OpenAI says search will deliver far more personalized results based on what ChatGPT knows about the user.

“Those might be persistent memories, like ‘I’m a vegetarian,’ or it might be contextual, like ‘I’m going to New York in the next few days,’” says Fry. “If you say ‘I’m going to New York in four days,’ it can remember that fact and the nuance of that point,” he adds. 
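
One way to picture the split between persistent and contextual memories is a small store in which some facts never expire and others fade after a few days. The sketch below is an assumption for illustration, not how ChatGPT's memory feature is actually implemented.

    # Sketch of persistent vs. contextual memories injected into a search prompt.
    from datetime import datetime, timedelta

    memories: list[dict] = []

    def remember(fact: str, expires_in_days: int | None = None) -> None:
        expires = datetime.now() + timedelta(days=expires_in_days) if expires_in_days else None
        memories.append({"fact": fact, "expires": expires})

    def active_memories() -> list[str]:
        now = datetime.now()
        return [m["fact"] for m in memories if m["expires"] is None or m["expires"] > now]

    remember("I'm a vegetarian")                           # persistent
    remember("I'm going to New York", expires_in_days=4)   # contextual, fades out

    # Active memories would be prepended to the search prompt to personalize results.
    print(active_memories())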

To help develop ChatGPT’s web search, OpenAI says it leveraged its partnerships with news organizations such as Reuters, the Atlantic, Le Monde, the Financial Times, Axel Springer, Condé Nast, and Time. However, its results include information not only from these publishers but from any other source online that does not actively block its search crawler.
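
In practice, "actively blocking" a crawler usually means a robots.txt rule that well-behaved bots check before fetching a page. The sketch below uses Python's standard library; "OAI-SearchBot" is OpenAI's documented name for its search crawler, but treat the specifics as an assumption if you adapt this.

    # Sketch of a publisher opt-out check via robots.txt.
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser()
    parser.set_url("https://example.com/robots.txt")
    parser.read()  # fetches and parses the site's robots.txt

    if parser.can_fetch("OAI-SearchBot", "https://example.com/some-article"):
        print("Crawler may index this page.")
    else:
        print("Publisher has opted out of search crawling.")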

It’s a positive development that ChatGPT will now be able to retrieve information from these reputable online sources and generate answers based on them, says Suzan Verberne, a professor of natural-language processing at Leiden University, who has studied information retrieval. It also allows users to ask follow-up questions.

But despite the enhanced ability to search the web and cross-check sources, the tool is not immune from the persistent tendency of AI language models to make things up or get it wrong. When MIT Technology Review tested the new search function and asked it for vacation destination ideas, ChatGPT suggested “luxury European destinations” such as Japan, Dubai, the Caribbean islands, Bali, the Seychelles, and Thailand. It offered as a source an article from the Times, a British newspaper, which listed these locations as well as those in Europe as luxury holiday options.

“Especially when you ask about untrue facts or events that never happened, the engine might still try to formulate a plausible response that is not necessarily correct,” says Verberne. There is also a risk that misinformation might seep into ChatGPT’s answers from the internet if the company has not filtered its sources well enough, she adds. 

Another risk is that the current push to access the web through AI search will disrupt the internet’s digital economy, argues Benjamin Brooks, a fellow at Harvard University’s Berkman Klein Center, who previously led public policy for Stability AI, in an op-ed published by MIT Technology Review today.

“By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and ‘eyeballs’ they need to survive,” Brooks writes.

Read more

Inspired by an unprecedented opportunity, the life sciences sector has gone all in on AI. For example, in 2023, Pfizer introduced an internal generative AI platform expected to deliver $750 million to $1 billion in value. And Moderna partnered with OpenAI in April 2024, scaling its AI efforts to deploy ChatGPT Enterprise, embedding the tool’s capabilities across business functions from legal to research.

In drug development, German pharmaceutical company Merck KGaA has partnered with several AI companies for drug discovery and development. And Exscientia, a pioneer in using AI in drug discovery, is taking more steps toward integrating generative AI drug design with robotic lab automation in collaboration with Amazon Web Services (AWS).

Given rising competition, higher customer expectations, and growing regulatory challenges, these investments are crucial. But to maximize their value, leaders must carefully consider how to balance the key factors of scope, scale, speed, and human-AI collaboration.

The early promise of connecting data

The common refrain from data leaders across all industries—but specifically from those within data-rich life sciences organizations—is “I have vast amounts of data all over my organization, but the people who need it can’t find it,” says Dan Sheeran, general manager of health care and life sciences for AWS. And in a complex healthcare ecosystem, data can come from multiple sources, including hospitals, pharmacies, insurers, and patients.

“Addressing this challenge,” says Sheeran, “means applying metadata to all existing data and then creating tools to find it, mimicking the ease of a search engine. Until generative AI came along, though, creating that metadata was extremely time consuming.”
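
A minimal sketch of that pattern, with the generative tagging step stubbed out (it is not a real AWS or Amazon Bedrock call), might look like this:

    # Sketch of generative metadata tagging plus a search-engine-style lookup.
    def generate_metadata(document: str) -> dict:
        # Placeholder: a real pipeline would prompt a foundation model to extract
        # topics, therapeutic areas, study phases, and other metadata.
        return {"topics": ["oncology", "phase-2 trial"], "source": "clinical-notes"}

    catalog: list[dict] = []

    def ingest(doc_id: str, document: str) -> None:
        catalog.append({"id": doc_id, "text": document, **generate_metadata(document)})

    def find(topic: str) -> list[str]:
        # Lookup over the generated metadata, mimicking the ease of a search engine.
        return [entry["id"] for entry in catalog if topic in entry["topics"]]

    ingest("doc-001", "Interim results from an oncology phase-2 trial ...")
    print(find("oncology"))  # ['doc-001']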

Mahmood Majeed, ZS’s global head of the digital and technology practice, notes that his teams regularly work on connected data programs, because “connecting data to enable connected decisions across the enterprise gives you the ability to create differentiated experiences.”

Majeed points to Sanofi’s well-publicized example of connecting data with its analytics app, plai, which streamlines research and automates time-consuming data tasks. With this investment, Sanofi reports that it has reduced research processes from weeks to hours and sees the potential to improve target identification in therapeutic areas like immunology, oncology, and neurology by 20% to 30%.

Achieving the payoff of personalization

Connected data also allows companies to focus on personalized last-mile experiences. This involves tailoring interactions with healthcare providers and understanding patients’ individual motivations, needs, and behaviors.

Early efforts around personalization have relied on “next best action” or “next best engagement” models to do this. These traditional machine learning (ML) models suggest the most appropriate information for field teams to share with healthcare providers, based on predetermined guidelines.

Compared with generative AI models, these traditional machine learning models can be inflexible and unable to adapt to individual provider needs, and they often struggle to connect with other data sources that could provide meaningful context. As a result, the insights they produce can be helpful but limited.

Sheeran notes that companies have a real opportunity to use connected data for better decision-making: “Because the technology is generative, it can create context based on signals. How does this healthcare provider like to receive information? What insights can we draw about the questions they’re asking? Can their professional history or past prescribing behavior help us provide a more contextualized answer? This is exactly what generative AI is great for.”

Beyond this, pharmaceutical companies spend millions of dollars annually to customize marketing materials. They must ensure the content is translated, tailored to the audience, and consistent with regulations for each location where they offer products and services. A process that usually takes weeks to develop individual assets has become a perfect use case for generative copy and imagery: with generative AI, the process is reduced from weeks to minutes and creates a competitive advantage through lower costs per asset, Sheeran says.
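
A minimal sketch of such an asset pipeline is shown below. Both steps are placeholders for calls to a generative model and a compliance review system, not any company's actual workflow.

    # Sketch of localizing approved marketing copy, then gating it on a compliance check.
    def localize(copy: str, locale: str, audience: str) -> str:
        # Placeholder: a real pipeline would prompt a model to translate and tailor the copy.
        return f"[{locale}/{audience}] {copy}"

    def passes_regulatory_check(copy: str, locale: str) -> bool:
        # Placeholder: real checks involve rule engines and human medical/legal review.
        return True

    def produce_asset(master_copy: str, locale: str, audience: str) -> str:
        draft = localize(master_copy, locale, audience)
        if not passes_regulatory_check(draft, locale):
            raise ValueError("Draft requires human review before release.")
        return draft

    print(produce_asset("Ask your doctor about treatment options.", "fr-FR", "cardiologists"))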

Accelerating drug discovery with AI, one step at a time

Perhaps the greatest hope for AI in life sciences is its ability to generate insights and intellectual property using biology-specific foundation models. “Our customers have seen the potential for very, very large models to greatly accelerate certain discrete steps in the drug discovery and development processes,” Sheeran says. “Now we have a much broader range of models available, and an even larger set of models coming that tackle other discrete steps.”

By Sheeran’s count, there are approximately six major categories of biology-specific models, each containing five to 25 models under development or already available from universities and commercial organizations.

The intellectual property generated by biology-specific models is a significant consideration. Services such as Amazon Bedrock support this by ensuring customers retain control over their data, with transparency and safeguards to prevent unauthorized retention and misuse.

Finding differentiation in life sciences with scope, scale, and speed

Organizations can differentiate with scope, scale, and speed, while determining how AI can best augment human ingenuity and judgment. “Technology has become so easy to access. It’s omnipresent. What that means is that it’s no longer a differentiator on its own,” says Majeed. He suggests that life sciences leaders consider:

Scope: Have we zeroed in on the right problem? By clearly articulating the problem relative to the few critical things that could drive advantage, organizations can identify technology and business collaborators and set standards for measuring success and driving tangible results.

Scale: What happens when we implement a technology solution on a large scale? The highest-priority AI solutions should be the ones with the most potential for results. Scale determines whether an AI initiative will have a broader, more widespread impact on a business, which provides the window for a greater return on investment, says Majeed.

By thinking through the implications of scale from the beginning, organizations can be clear on the magnitude of change they expect and how bold they need to be to achieve it. The boldest commitment to scale is when companies go all in on AI, as Sanofi is doing, setting goals to transform the entire value chain and setting the tone from the very top.

Speed: Are we set up to quickly learn and correct course? Organizations that can rapidly learn from their data and AI experiments, adjust based on those learnings, and continuously iterate are the ones that will see the most success. Majeed emphasizes, “Don’t underestimate this component; it’s where most of the work happens. A good partner will set you up for quick wins, keeping your teams learning and maintaining momentum.”

Sheeran adds, “ZS has become a trusted partner for AWS because our customers trust that they have the right domain expertise. A company like ZS has the ability to focus on the right uses of AI because they’re in the field and on the ground with medical professionals, giving them the ability to constantly stay ahead of the curve by exploring the best ways to improve their current workflows.”

Human-AI collaboration at the heart

Despite the allure of generative AI, the human element is the ultimate determinant of how it’s used. In certain cases, traditional technologies outperform it, with less risk, so understanding what it’s good for is key. By cultivating broad technology and AI fluency throughout the organization, leaders can teach their people to find the most powerful combinations of human-AI collaboration for technology solutions that work. After all, as Majeed says, “it’s all about people—whether it’s customers, patients, or our own employees’ and users’ experiences.”

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Housing is an election issue. But the US sucks at it.

Ahead of abortion access, ahead of immigration, and way ahead of climate change, US voters under 30 are most concerned about one issue: housing affordability. And it’s not just young voters who say soaring rents and eye-watering home sale prices are among their top worries. For the first time in recent memory, the cost of housing could be a major factor in the presidential election.  

It’s not hard to see why. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. 

Permitting delays and strict zoning rules create huge obstacles to building more and faster—as do other widely recognized issues, like the political power of NIMBY activists across the country and an ongoing shortage of skilled workers. But there is also another, less talked-about problem: We’re not very efficient at building, and we seem somehow to be getting worse. Read the full story.

—David Rotman

Inside a fusion energy facility

—Casey Crownhart

On an overcast day in early October, I picked up a rental car and drove to Devens, Massachusetts, to visit a hole in the ground.

Commonwealth Fusion Systems has raised over $2 billion in funding since it spun out of MIT in 2018, all in service of building the first commercial fusion reactor. The plan is to have it operating by 2026.

I visited the company’s site recently to check in on progress. Things are starting to come together and, looking around the site, I found it becoming easier to imagine a future that could actually include fusion energy. But there’s still a lot of work left to do. Read the full story.

This story is from The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.

MIT Technology Review Narrated: How gamification took over the world

Instead of liberating us from drudgery and maximizing our potential, gamification has turned out to be just another tool for coercion, distraction, and control. Why did we fall for it?

This is our latest story to be turned into an MIT Technology Review Narrated podcast. In partnership with News Over Audio, we’ll be making a selection of our stories available, each one read by a professional voice actor. You’ll be able to listen to them on the go or download them to listen to offline.

We’re publishing a new story each week on Spotify and Apple Podcasts, including some taken from our most recent print magazine.

Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Bird flu has been found in a pig in the US for the first time 
The USDA says it’s not cause for panic. But it’s certainly cause for concern. (Reuters)
Why virologists are getting increasingly nervous about bird flu. (MIT Technology Review)
 
2 Elon Musk has turned X into a political weapon 
This is what $44 billion bought him: the ability to flood the zone with falsehoods during an election. (The Atlantic $)
X’s crowdsourced fact-checking program is falling woefully short. (WP $)
And it’s not just X. YouTube is full of election conspiracy content too. (NYT $)
+ Spare a thought for the election officials who have to navigate this mess. (NPR)
 
3 Europe’s big tech hawks are nervously eyeing the US election
Biden was an ally in their efforts to crack down. Either of his potential successors looks like a less sure bet. (Wired $)
Attendees regularly fail to disclose their links to big tech at EU events. (The Guardian)
 
4 The AI boom is being powered by concrete
It’s a major ingredient for data centers and the power plants being built to serve them—and a climate disaster. (IEEE Spectrum)
How electricity could help tackle a surprising climate villain. (MIT Technology Review)
 
5 What makes human brains so special? 🧠
Much of the answer is still a mystery—but researchers are uncovering more and more promising leads. (Nature)
+ Tech that measures our brainwaves is 100 years old. How will we be using it 100 years from now? (MIT Technology Review)
 
6 Boston Dynamics’ humanoid robot is getting much more capable
If its latest video, in which it autonomously picks up and moves car parts, is anything to go by. (TechCrunch)
A skeptic’s guide to humanoid-robot videos. (MIT Technology Review)
 
7 Alexa desperately needs a revamp
The voice assistant was launched 10 years ago, and it’s been disappointing us ever since. (The Verge)
 
8 We’re sick of algorithms recommending us stuff
Lots of people are keen to turn back to guidance from other humans. (New Yorker $)
If you’re one of them, I have bad news: AI is going to make the problem much worse. (Fortune $)
 
9 Russia fined Google $20,000,000,000,000,000,000,000,000,000,000,000
That’s more money than exists on Earth but sure, don’t let that stop you. (The Register)
 
10 What is going on with Mark Zuckerberg recently 
He’s using clothes to rebrand himself and… it’s kinda working?! (Slate)

Quote of the day

“It’s what happens when you let a bunch of grifters take over.”

—A Trumpworld source explains to Wired why Donald Trump’s ground campaign in Michigan is so chaotic. 

 The big story

A day in the life of a Chinese robotaxi driver


July 2022

When Liu Yang started his current job, he found it hard to go back to driving his own car: “I instinctively went for the passenger seat. Or when I was driving, I would expect the car to brake by itself,” says the 33-year-old Beijing native, who joined the Chinese tech giant Baidu in January 2021 as a robotaxi driver.

Liu is one of the hundreds of safety operators employed by Baidu, “driving” five days a week in Shougang Park. But despite having only worked for the company for 19 months, he already has to think about his next career move, as his job will likely be eliminated within a few years. Read the full story.

—Zeyi Yang

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ Happy Halloween! Check out some of the best spine-chilling classic novels
+ If scary movies are more your jam, I’ve still got you covered.
+ These photo montages of music fans outside concerts are incredible. 
+ Love that this guy went from being terrified of rollercoasters to designing them.
+ You’ll probably never sort your life out. And that’s OK.

Read more