Ice Lounge Media

Recorded on November 19, 2024

What’s Next for Mixed Reality: Glasses, Goggles, and More

Speakers: Mat Honan, Editor in Chief, and James O’Donnell, AI hardware reporter.

We are barreling toward the next big consumer device category: smart glasses. After years of trying, augmented-reality specs are at last a thing. Meta recently showed off its Orion smart glasses, and Snap has introduced its second-generation pair. The Pentagon is also working on mixed-reality headsets that can be used on the battlefield. Hear from MIT Technology Review editor in chief Mat Honan and AI hardware reporter James O’Donnell for a conversation about where our AR experiences are heading.

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How the largest gathering of US police chiefs is talking about AI

—James O’Donnell

The International Association of Chiefs of Police bills itself as the largest gathering of its type in the United States. Leaders from many of the country’s 18,000 police departments and even some from abroad convene for product demos, discussions, parties, and awards. 

I went along last month to see how artificial intelligence was being discussed, and the message to police chiefs seemed crystal clear: If your department is slow to adopt AI, fix that now. From the expo hall, talks, and interviews, it seems they’re already enthusiastically heeding the call. Read the full story.

This story is from The Algorithm, our weekly AI newsletter. Sign up to receive it in your inbox every Monday.

Roundtables: What’s Next for Mixed Reality: Glasses, Goggles, and More

After years of trying, augmented-reality specs are at last a thing. 

If you want to learn more about where AR experiences are heading, join our editor in chief Mat Honan and AI hardware reporter James O’Donnell for a Roundtables conversation streamed online at 2pm ET/11am PT today. It’s for subscribers only, but there’s good news: this week our subscriptions are half price. Don’t miss out!

Read more about mixed reality:

+ We interviewed Palmer Luckey, founder of Oculus, about his plans to bring mixed-reality goggles to soldiers. Here’s what he had to say.

+ The coolest thing about smart glasses is not the AR. It’s the AI.

+ Snap has launched new augmented-reality Spectacles. Here’s what we made of them.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The FBI is investigating threats texted to Latino and LGBTQ+ people 
They claim recipients will be deported or sent to a re-education camp. (WP $)
+ ICE can already sidestep sanctuary city laws through data-sharing centers. (Wired $)
+ Trump has confirmed he plans to use the military for mass deportations. (NYT $)

2 Chinese tech groups are building AI teams in Silicon Valley 
Despite Washington’s best efforts to stymie their work. (FT $)
+ How a US ban on investing in Chinese startups could escalate under Trump. (Wired $)

3 How Apple will cope with looming tariffs 
The fact CEO Tim Cook already has a relationship with Trump will surely help. (Bloomberg $)

4 Two undersea cables in the Baltic Sea have been disrupted 
It looks like Russia is trying to interfere with global undersea infrastructure. (CNN)
+ A Russian spy ship had to be escorted out of the Irish Sea last weekend too. (The Guardian)

5 An AI tool could help solve math problems humans are stuck on
It’s a good example of how blending human and machine intelligence can produce positive results. (New Scientist $)
+ This AI system makes human tutors better at teaching children math. (MIT Technology Review)

6 Robots still struggle to match warehouse workers on some tasks
For all the advances robots have made, picking things up and moving them around remains a big challenge. (NYT $)
+ AI is poised to automate today’s most mundane manual warehouse task. (MIT Technology Review)

7 Perplexity’s AI search engine can now buy stuff for you
How long until Google follows? (The Verge)

8 Dozens of states are begging Congress to pass the Kids Online Safety Act
It’s currently stalled in the House of Representatives due to censorship concerns. (The Verge)
+ Roblox is adding more controls to let parents set daily usage limits, block access to certain game genres, and more. (WSJ $)
+ Why child safety bills are popping up all over the US

9 The US Patent and Trademark Office banned staff from using generative AI
It cited security concerns plus the fact some tools exhibit “bias, unpredictability, and malicious behavior.” (Wired $)

10 NASA might have killed life on Mars 😬
A new paper suggests that adding water to Martian soil might have been a bad move. (Quartz $)
+ The ISS has been leaking air for 5 years, and engineers still can’t agree why. (Ars Technica)

Quote of the day

“We are bleeding cash as an industry.” 

—Thomas Laffont, co-founder of investment firm Coatue Management, says venture capital firms are struggling to make money amid a boom in AI investments, the Wall Street Journal reports.

The big story

How mobile money supercharged Kenya’s sports betting addiction

April 2022

Mobile money has mostly been hugely beneficial for Kenyans. But it has also turbo-charged the country’s sports betting sector.

Experts and public figures across the African continent are sounding the alarm over the sector’s growth ever more loudly. It has produced tales of riches, but it has also broken families, consumed college tuitions, and even driven some to suicide. Read the full story.

—Jonathan W. Rosen

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or tweet ’em at me.)

+ I just learned a pertinent word for this season: abscission
+ Only some people will get this… but if you’re one of them, you’ll enjoy it. 
+ Why Late of the Pier were one of the most exciting UK bands of the 2000s.
+ Whether you call them crisps or chips, they’re goddamn delicious.

This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

It can be tricky for reporters to get past certain doors, and the door to the International Association of Chiefs of Police conference is one that’s almost perpetually shut to the media. Thus, I was pleasantly surprised when I was able to attend for a day in Boston last month. 

It bills itself as the largest gathering of police chiefs in the United States, where leaders from many of the country’s 18,000 police departments and even some from abroad convene for product demos, discussions, parties, and awards. 

I went along to see how artificial intelligence was being discussed, and the message to police chiefs seemed crystal clear: If your department is slow to adopt AI, fix that now. The future of policing will rely on it in all its forms.

In the event’s expo hall, the vendors (of which there were more than 600) offered a glimpse into the ballooning industry of police-tech suppliers. Some had little to do with AI—booths showcased body armor, rifles, and prototypes of police-branded Cybertrucks, and others displayed new types of gloves promising to protect officers from needles during searches. But one needed only to look to where the largest crowds gathered to understand that AI was the major draw. 

The hype focused on three uses of AI in policing. The flashiest was virtual reality, exemplified by the booth from V-Armed, which sells VR systems for officer training. On the expo floor, V-Armed built an arena complete with VR goggles, cameras, and sensors, not unlike the one the company recently installed at the headquarters of the Los Angeles Police Department. Attendees could don goggles and go through training exercises on responding to active shooter situations. Many competitors of V-Armed were also at the expo, selling systems they said were cheaper, more effective, or simpler to maintain. 

The pitch on VR training is that in the long run, it can be cheaper and more engaging to use than training with actors or in a classroom. “If you’re enjoying what you’re doing, you’re more focused and you remember more than when looking at a PDF and nodding your head,” V-Armed CEO Ezra Kraus told me. 

The effectiveness of VR training systems has yet to be fully studied, and they can’t completely replicate the nuanced interactions police have in the real world. AI is not yet great at the soft skills required for interactions with the public. At a different company’s booth, I tried out a VR system focused on deescalation training, in which officers were tasked with calming down an AI character in distress. It suffered from lag and was generally quite awkward—the character’s answers felt overly scripted and programmatic. 

The second focus was on the changing way police departments are collecting and interpreting data. Rather than buying a gunshot detection tool from one company and a license plate reader or drone from another, police departments are increasingly using expanding suites of sensors, cameras, and so on from a handful of leading companies that promise to integrate the data collected and make it useful. 

Police chiefs attended classes on how to build these systems, like one taught by Microsoft and the NYPD about the Domain Awareness System, a web of license plate readers, cameras, and other data sources used to track and monitor crime in New York City. Crowds gathered at massive, high-tech booths from Axon and Flock, both sponsors of the conference. Flock sells a suite of cameras, license plate readers, and drones, offering AI to analyze the data coming in and trigger alerts. These sorts of tools have come in for heavy criticism from civil liberties groups, which see them as an assault on privacy that does little to help the public. 

Finally, as in other industries, AI is also coming for the drudgery of administrative tasks and reporting. Many companies at the expo, including Axon, offer generative AI products to help police officers write their reports. Axon’s offering, called Draft One, ingests footage from body cameras, transcribes it, and creates a first draft of a report for officers. 

“We’ve got this thing on an officer’s body, and it’s recording all sorts of great stuff about the incident,” Bryan Wheeler, a senior vice president at Axon, told me at the expo. “Can we use it to give the officer a head start?”

On the surface, it’s a writing task well suited for AI, which can quickly summarize information and write in a formulaic way. It could also save lots of time officers currently spend on writing reports. But given that AI is prone to “hallucination,” there’s an unavoidable truth: Even if officers are the final authors of their reports, departments adopting these sorts of tools risk injecting errors into some of the most critical documents in the justice system. 

“Police reports are sometimes the only memorialized account of an incident,” wrote Andrew Ferguson, a professor of law at American University, in July in the first law review article about the serious challenges posed by police reports written with AI. “Because criminal cases can take months or years to get to trial, the accuracy of these reports are critically important.” Whether certain details were included or left out can affect the outcomes of everything from bail amounts to verdicts. 

By showing an officer a generated version of a police report, the tools also expose officers to details from their body camera recordings before they complete their report, a document intended to capture the officer’s memory of the incident. That poses a problem. 

“The police certainly would never show video to a bystander eyewitness before they ask the eyewitness about what took place, as that would just be investigatory malpractice,” says Jay Stanley, a senior policy analyst with the ACLU Speech, Privacy, and Technology Project, who will soon publish work on the subject. 

A spokesperson for Axon says this concern “isn’t reflective of how the tool is intended to work,” and that Draft One has robust features to make sure officers read the reports closely, add their own information, and edit the reports for accuracy before submitting them.

My biggest takeaway from the conference was simply that the way US police are adopting AI is inherently chaotic. There is no one agency governing how they use the technology, and the roughly 18,000 police departments in the United States—the precise figure is not even known—have remarkably high levels of autonomy to decide which AI tools they’ll buy and deploy. The police-tech companies that serve them will build the tools police departments find attractive, and it’s unclear if anyone will draw proper boundaries for ethics, privacy, and accuracy. 

That will only be made more apparent in an upcoming Trump administration. In a policing agenda released last year during his campaign, Trump encouraged more aggressive tactics like “stop and frisk,” deeper cooperation with immigration agencies, and increased liability protection for officers accused of wrongdoing. The Biden administration is now reportedly attempting to lock in some of its proposed policing reforms before January. 

Without federal regulation on how police departments can and cannot use AI, the lines will be drawn by departments and police-tech companies themselves.

“Ultimately, these are for-profit companies, and their customers are law enforcement,” says Stanley. “They do what their customers want, in the absence of some very large countervailing threat to their business model.”


Now read the rest of The Algorithm

Deeper Learning

The AI lab waging a guerrilla war over exploitative AI

When generative AI tools landed on the scene, artists were immediately concerned, seeing them as a new kind of theft. Computer security researcher Ben Zhao jumped into action in response, and his lab at the University of Chicago started building tools like Nightshade and Glaze to help artists keep their work from being scraped up by AI models. My colleague Melissa Heikkilä spent time with Zhao and his team to look at the ongoing effort to make these tools strong enough to stop AI’s relentless hunger for more images, art, and data to train on.  

Why this matters: The current paradigm in AI is to build bigger and bigger models, and these require vast data sets to train on. Tech companies argue that anything on the public internet is fair game, while artists demand compensation or the right to refuse. Settling this fight in the courts or through regulation could take years, so tools like Nightshade and Glaze are what artists have for now. If the tools disrupt AI companies’ efforts to make better models, that could push them to the negotiating table to bargain over licensing and fair compensation. But it’s a big “if.” Read more from Melissa Heikkilä.

Bits and Bytes

Tech elites are lobbying Elon Musk for jobs in Trump’s administration

Elon Musk is the tech leader with the most direct line to Trump. As such, he’s reportedly the conduit through which AI and tech insiders are pushing to have an influence in the incoming administration. (The New York Times)

OpenAI is getting closer to launching an AI agent to automate your tasks

AI agents—models that can carry out tasks on your behalf—are all the rage. OpenAI is reportedly closer to releasing one, news that comes a few weeks after Anthropic announced its own. (Bloomberg)

How this grassroots effort could make AI voices more diverse

A massive volunteer-led effort to collect training data in more languages, from people of more ages and genders, could help make the next generation of voice AI more inclusive and less exploitative. (MIT Technology Review)

Google DeepMind has a new way to look inside an AI’s “mind”

Autoencoders let us peer into the black box of artificial intelligence. They could help us create AI that is better understood and more easily controlled. (MIT Technology Review)

Musk has expanded his legal assault on OpenAI to target Microsoft

Musk has expanded his federal lawsuit against OpenAI, which alleges that the company has abandoned its nonprofit roots and obligations. He’s now going after Microsoft too, accusing it of antitrust violations in its work with OpenAI. (The Washington Post)
