This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.
If you follow drone news closely—and you’re forgiven if you don’t—you may have noticed over the last few months that the Federal Aviation Administration (FAA) has been quite busy. For decades, the agency had been a thorn in the side of drone evangelists, who wanted more freedom to fly drones in shared airspaces or dense neighborhoods. The FAA’s rules have made it cumbersome for futuristic ideas like drones delivering packages to work at scale.
Lately, that’s been changing. The agency recently granted Amazon’s Prime Air program approval to fly drones beyond its pilots’ visual line of sight in parts of Texas. The FAA has also granted similar waivers to hundreds of police departments around the country, which are now able to fly drones miles away, much to the ire of privacy advocates.
However, while the FAA doling out more waivers is notable, there’s a much bigger change coming in less than a month. It promises to be the most significant drone decision in decades, and one that will decide just how many drones we all can expect to see and hear buzzing above us in the US on a daily basis.
By September 16—if the FAA adheres to its deadline—the agency must issue a Notice of Proposed Rulemaking about whether drones can be flown beyond a visual line of sight. In other words, rather than issuing one-off waivers to police departments and delivery companies, it will propose a rule that applies to everyone using the airspace and aims to minimize the safety risk of drones flying into one another, or falling and injuring people or damaging property below.
The FAA was first directed to come up with a rule back in 2018, but it hasn’t delivered. The September 16 deadline was put in place by the most recent FAA Reauthorization Act, signed into law in May. The agency will have 16 months after releasing the proposed rule to issue a final one.
Who will craft such an important rule, you ask? The FAA convened a rulemaking committee of 87 organizations to advise it. Half are commercial operators like Amazon and FedEx, drone manufacturers like Skydio, or other tech interests like Airbus or T-Mobile. There are also a handful of privacy groups like the American Civil Liberties Union, as well as academic researchers.
It’s unclear where exactly the agency’s proposed rule will fall, but experts in the drone space told me that the FAA has grown much more accommodating of drones, and they expect this ruling to be reflective of that shift.
If the rule makes it easier for pilots to fly beyond their line of sight, nearly every type of drone pilot will benefit from fewer restrictions. Search and rescue teams, for example, could more easily use drones to find missing persons in the wilderness without an FAA waiver, which is hard to obtain quickly in an emergency.
But if more drones take to the skies with their pilots nowhere in sight, it will have massive implications. “The [proposed rule] will likely allow a broad swath of operators to conduct wide-ranging drone flights beyond their visual line of sight,” says Jay Stanley, a senior policy analyst at the American Civil Liberties Union’s Speech, Privacy, and Technology Project. “That could open up the skies to a mass of delivery drones (from Amazon and UPS to local ‘burrito-copters’ and other deliveries), local government survey or code-enforcement flights, and a whole new swath of police surveillance operations.”
Read more about what’s coming next for drones from me here.
Now read the rest of The Algorithm
Deeper Learning
The US wants to use facial recognition to identify migrant children as they age
The US Department of Homeland Security (DHS) is looking into ways it might use facial recognition technology to track the identities of migrant children, “down to the infant,” as they age. That’s according to John Boyd, assistant director of the department’s Office of Biometric Identity Management (OBIM), whose role includes researching and developing future biometric identity services for the government. The previously unreported project is intended to improve how facial recognition algorithms track children over time.
Why this matters: Facial recognition technology (FRT) has traditionally not been applied to children, largely because training data sets of real children’s faces are few and far between, consisting of either low-quality images drawn from the internet or small samples with little diversity. Those limitations reflect the significant privacy and consent sensitivities surrounding minors. Immigrants’ rights organizations and privacy advocates told MIT Technology Review that a DHS program trained specifically on images of children raises serious concerns about whether those children will be able to opt out of biometric data collection. Read more from Eileen Guo here.
Bits and Bytes
A new public database lists all the ways AI could go wrong
The AI Risk Repository documents over 700 potential risks advanced AI systems could pose. It’s the most comprehensive source yet of information about previously identified issues that could arise from the creation and deployment of these models. (MIT Technology Review)
Escaping Spotify’s algorithm
According to a 2022 report published by Distribution Strategy Group, at least 30% of songs streamed on Spotify are recommended by AI. By delivering what people seem to want, has Spotify killed the joy of music discovery? (MIT Technology Review)
How ‘Deepfake Elon Musk’ became the internet’s biggest scammer
An AI-powered version of Mr. Musk has appeared in thousands of inauthentic ads, contributing to billions in fraud. (The New York Times)
Google’s conversational assistant Gemini Live has launched
Google’s Gemini Live, which was teased back in May, is the company’s closest answer to OpenAI’s GPT-4o. The model can hold conversations in real time, and you can interrupt it mid-sentence. Google finally rolled it out earlier this week. (Google)