This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.
There’s been a quiet shift in the abortion fight in the US. Since the Supreme Court reversed Roe v. Wade last June, 13 states have passed laws that make most abortions illegal. Efforts to restrict abortion care have, so far, focused mostly on criminalizing medical providers. But increasingly, the battleground is moving online.
Texas is trying to limit access to abortion pills by cracking down on internet service providers and credit card processing companies. These tactics reflect the reality that, post-Roe, the internet is a critical channel for people seeking information about abortion or trying to buy pills to terminate a pregnancy—especially in states where they can no longer access these things in physical pharmacies or medical centers.
Texas has long been a laboratory for anti-abortion political tactics, and on March 15 a US district judge heard arguments in a case seeking to reverse the FDA’s approval of mifepristone, a drug that can be used to terminate an early pregnancy. A ruling against the FDA would limit abortions facilitated online and would have far-reaching consequences even in states that are not trying to restrict abortion.
Earlier this month, Republicans in the Texas state legislature introduced two bills to restrict access to abortion pills. The first bill, HB 2690, would require internet service providers (ISPs) to ban sites that provide access to the pills or information about obtaining them. Companies like AT&T and Spectrum would have to “make every reasonable and technologically feasible effort to block Internet access to information or material intended to assist or facilitate efforts to obtain an elective abortion or an abortion-inducing drug.” The bill would also forbid both publishers and ordinary people from providing information about access to abortion-inducing drugs.
The second bill, SB 1440, would make it a felony for credit card companies to process transactions for abortion pills, and would also make them liable to lawsuits from the public.
Blair Wallace, a policy and advocacy strategist at the ACLU of Texas, a nonprofit that advocates for civil liberties and reproductive choice, said the recent developments mark “a new frontier for the ways in which they’re coming for [abortion access],” adding: “It is really terrifying.”
Wallace sees it as a continuation of a strategy that seeks to criminalize whole abortion care networks with the aim of isolating people seeking abortions. More broadly, this strategy of censoring information and language has become a popular tactic in US culture wars in the last several years, and the proposed bill could incentivize platforms to aggressively remove information about abortion access out of concern for legal risk. Some sites, like Meta’s Instagram and Facebook, have reportedly removed information about abortion pills in the past.
So what might come of all this activity in Texas? Both the bill targeting ISPs and this week’s mifepristone case are unprecedented, which makes them unlikely to succeed. That said, the tactics are likely to stay. “Will we see it again next session? Will we see parts of this bill stripped down and put into amendments? There’s like a million ways that this can play out,” says Wallace. Anti-abortion political strategy is coordinated nationally even though the fights are playing out at a state level, and it’s likely that other states will target online spaces going forward.
Online abortion resources can pose risks to privacy. But there are lots of ways to access them more safely. Here are some resources I recommend.
What I am reading this week
AI had a very big news week with the release of GPT-4
- My colleague Will Douglas Heaven wrote about GPT-4’s impressive capabilities and the many things we still don’t know about OpenAI’s newest generative AI system.
- There have also been a ton of stories about what AI might do in its more advanced form, like writing our laws and being our boyfriend.
The hype around AI has been accompanied by mass layoffs of the people who understand how to use it responsibly.
- Microsoft laid off its entire responsible-AI team, saying that it was prioritizing getting OpenAI’s products “into customers’ hands at a very high speed,” according to Platformer.
- Trust and safety professionals across the tech sector have been hit particularly hard by the recent layoffs.
The Biden administration has threatened a ban on TikTok if the Chinese owners don’t sell their majority stake.
- It’s the first time the US has threatened a national ban, and will likely be met with legal challenges from TikTok’s parent company, ByteDance, according to NPR.
- China is also extremely unlikely to agree to the sale, The Information has reported, which could cause yet another major rift between the US and China.
- My colleague Zeyi Yang, our China reporter, wrote about how some of the fear around Chinese apps might be overblown.
What I learned this week
Humans aren’t always very good at detecting AI-written text, according to a new study published in the Proceedings of the National Academy of Sciences by researchers at Stanford and Cornell. Interestingly, the researchers found that AI systems can “predict and manipulate whether people perceive AI-generated language as human.” The study raises questions about transparency, copyright, and plagiarism in a world that’s rapidly filling up with AI-generated content. If you’re interested in this topic, I highly recommend reading this piece by my colleague Melissa Heikkilä about how to spot AI-generated text.