The Senate Judiciary Committee voted in favor of issuing subpoenas for Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey Thursday, meaning that there might be two big tech CEO hearings on the horizon.
Republicans in the committee declared their interest in a hearing on “the platforms’ censorship of New York Post articles” after social networks limited the reach of a dubious story purporting to contain hacked materials implicating Hunter Biden, Joe Biden’s son, in impropriety involving a Ukrainian energy firm. Fox News reportedly passed on the story due to doubts about its credibility.
Tech’s decision to take action against the New York Post story was bound to ignite Republicans in Congress, who have long claimed, with scant evidence, that social platforms deliberately censor conservative voices due to political bias. The Senate Judiciary Committee is chaired by Lindsey Graham (R-SC), a close Trump ally who is now in a much closer than expected race with Democratic challenger Jaime Harrison.
According to a motion filed by Graham, the hearing would address:
(1) the suppression and/or censorship of two news articles from the New York Post titled “Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad” and “Emails reveal how Hunter Biden tried to cash in big on behalf of family with Chinese firm,” (2) any other content moderation policies, practices, or actions that may interfere with or influence elections for federal office, and (3) any other recent determinations to temporarily reduce distribution of material pending factchecker review and/or block and mark material as potentially unsafe.
Earlier in October, the Senate Commerce Committee successfully leveraged subpoena power to secure testimony from Dorsey, Zuckerberg and Alphabet’s Sundar Pichai at its own hearing focused on Section 230, the critical law that shields online platforms from liability for user-created content.
The hearing isn’t scheduled yet, nor have the companies publicly agreed to attend. But lawmakers have now established a precedent for successfully dragging tech’s reluctant leaders under oath, making it more difficult for some of the world’s wealthiest and most powerful men to avoid Congress from here on out.
A pair of U.S. Representatives — one from each party — are proposing a law that would limit the president’s ability to shut down the internet at will. That may not strike you as an imminent threat, but federal police disappearing protestors into unmarked vans probably didn’t either, until a couple months ago. Let’s keep an open mind.
The president has the power under the Communications Act’s Section 706 to order the shutdown of some communications infrastructure in an emergency. While this was likely intended more for making sure official phone calls could get through in a national emergency, it’s possible that today it could be used as a measure to tamp down on protests and civil unrest, as we’ve seen in authoritarian regimes around the world.
The Preventing Unwarranted Communications Shutdowns Act, from Rep. Anna Eshoo (D-CA) and Rep. Morgan Griffith (R-VA), doesn’t remove this ability, but adds several layers of accountability to it.
In the first place, the bill would limit Section 706 use to when there is an “imminent and specific threat to human life or national security.” This prevents it from being put into play when there is a more general “threat” such as a major protest that might be too much for local police to handle.
The bill would also require the president to inform top government officials, including opposition leaders, of any shutdown: ideally beforehand, but no later than 12 hours afterward (any later and the shutdown is illegal). Any shutdown ends automatically after 48 hours unless three-fifths of Congress votes to continue it.
The U.S. government would also be obligated to compensate providers and customers for the monetary value of the shutdown’s impact. This could end up being quite expensive, depending on how it’s calculated.
Lastly, a Government Accountability Office report is required after every use of Section 706, and the GAO gets to the bottom of everything.
Whether this bill has any chance of becoming law is, like practically everything these days, anyone’s guess. But bipartisan laws limiting potential curtailments of civil rights by the White House are probably going to be fairly popular after the feds’ shenanigans over the summer.
At the very least it has some heavy hitters offering glowing blurbs:
FCC Commissioner Jessica Rosenworcel: “In the United States our laws are dated and they offer virtually unchecked power to the president over our wired and wireless communications when we face peril or national emergency. So kudos to Congresswoman Eshoo for legislation to modernize our laws and put in place safeguards to ensure that the internet stays on when we need it most.”
Former Secretary of Homeland Security Michael Chertoff: “A long overdue check and balance on a President’s authority to shut down or significantly curtail internet communication under the guise of an emergency.”
Former FCC Chairman Tom Wheeler: “In a time of emergency, how the internet operates is in the hands of one person. Defining that authority in a focused manner and adding congressional oversight would bring an old statute into the digital age.”
Cybersecurity mainstay Bruce Schneier: “The Internet is critical infrastructure, and needs to be protected from politically motivated shut-downs. This bill helps ensure that the communications censorship that is increasingly common in other countries doesn’t happen in the US. It adds process, and checks and balances, to what is currently an ad hoc authority.”
You can read more about the bill and read the full text here.
Quibi is shutting down — we know that much for sure.
But when? If you’re looking to blast through all 25 episodes of the Reno 911 revival series before Quibi calls it quits, how long do you actually have?
While it seems even Quibi isn’t 100% certain yet, they’ve at least now given users a rough idea of when they expect the plug to be pulled: early December.
As spotted by Variety, a newly published support page on the Quibi site says streaming will end “on or about December 1, 2020.” The “about” suggests that the shutdown date isn’t fully locked quite yet, but it should be sometime around then.
Will any Quibi shows find their way over to Netflix, Hulu, etc.? That’s still up in the air, too. “At this time we do not know if the Quibi content will be available anywhere after our last day of service,” the company writes in a note on the same page.
We’re excited to announce an update to the Extra Crunch Partner Perk from Zendesk. Starting today, annual and two-year Extra Crunch members who are new to Zendesk and meet its startup qualifications can receive six months of free access to Zendesk’s Sales CRM, in addition to Zendesk Support Suite, Zendesk Explore and Zendesk Sunshine.
Here is an overview of the program.
Zendesk is a service-first CRM company with support, sales and customer engagement products designed to improve customer relationships. This offer is only available for startups that are new to Zendesk, have fewer than 100 employees and are funded but have not raised beyond a Series B.
The Zendesk Partner Perk from Extra Crunch covers subscription fees for six months, after which you will be responsible for payment. Any downgrades to your Zendesk subscription will result in forfeiture of the promotion, so please check with Zendesk first regarding any changes (startups@zendesk.com). Some add-ons, such as Zendesk Talk and Zendesk Sell minutes, are not included. Complete details of what’s included can be found here.
Facebook’s dating feature expands after a regulatory delay, we review the new Amazon Echo and President Donald Trump has an on-the-nose Twitter password. This is your Daily Crunch for October 22, 2020.
The big story: Facebook Dating comes to Europe
Back in February, Facebook had to call off the European launch of its dating service after failing to provide the Irish Data Protection Commission with enough advance notice of the launch. Now it seems the regulator has given Facebook the go-ahead.
Facebook Dating (which launched in the U.S. last year) allows users to create a separate dating profile, identify secret crushes and go on video dates.
As for any privacy and regulatory concerns, the commission told us, “Facebook has provided detailed clarifications on the processing of personal data in the context of the Dating feature … We will continue to monitor the product as it launches across the EU this week.”
The tech giants
Amazon Echo review: Well-rounded sound — This year’s redesign centers on another audio upgrade.
Facebook adds hosting, shopping features and pricing tiers to WhatsApp Business — Facebook is launching a way to shop for and pay for goods and services in WhatsApp chats, and it said it will finally start to charge companies using WhatsApp for Business.
Spotify takes on radio with its own daily morning show — The new program will combine news, pop culture, entertainment and music personalized to the listener.
Startups, funding and venture capital
Chinese live tutoring app Yuanfudao is now worth $15.5 billion — The homework tutoring app founded in 2012 has surpassed Byju’s as the most valuable edtech company in the world.
E-bike subscription service Dance closes $17.7M Series A, led by HV Holtzbrinck Ventures — The founders of SoundCloud launched their e-bike service three months ago.
Freelancer banking startup Lili raises $15M — It’s only been a few months since Lili announced its $10 million seed round, and it’s already raised more funding.
Advice and analysis from Extra Crunch
How unicorns helped venture capital get later, and bigger — Q3 2020 was a standout period for how high late-stage money stacked up compared to cash available to younger startups.
Ten Zurich-area investors on Switzerland’s 2020 startup outlook — According to official estimates, the number of new Swiss startups has skyrocketed by 700% since 1996.
Four quick bites and obituaries on Quibi (RIP 2020-2020) — What we can learn from Quibi’s amazing, instantaneous, billions-of-dollars failure.
(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)
Everything else
President Trump’s Twitter accessed by security expert who guessed password “maga2020!” — After logging into President Trump’s account, the researcher said he alerted Homeland Security and the password was changed.
For the theremin’s 100th anniversary, Moog unveils the gorgeous Claravox Centennial — With a walnut cabinet, brass antennas and a plethora of wonderful knobs and dials, the Claravox looks like it emerged from a prewar recording studio.
Announcing the Agenda for TC Sessions: Space 2020 — Our first-ever dedicated space event is happening on December 16 and 17.
The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.
A pandemic is raging with devastating consequences, and long-standing problems with racial bias and political polarization are coming to a head. Artificial intelligence (AI) has the potential to help us deal with these challenges. However, AI’s risks have become increasingly apparent. Scholarship has illustrated cases of AI opacity and lack of explainability, design choices that result in bias, negative impacts on personal well-being and social interactions, and changes in power dynamics between individuals, corporations, and the state, contributing to rising inequalities. Whether AI is developed and used in good or harmful ways will depend in large part on the legal frameworks governing and regulating it.
There should be a new guiding tenet to AI regulation, a principle of AI legal neutrality asserting that the law should tend not to discriminate between AI and human behavior. Currently, the legal system is not neutral. An AI that is significantly safer than a person may be the best choice for driving a vehicle, but existing laws may prohibit driverless vehicles. A person may manufacture higher-quality goods than a robot at a similar cost, but a business may automate because it saves on taxes. AI may be better at generating certain types of innovation, but businesses may not want to use AI if this restricts ownership of intellectual-property rights. In all these instances, neutral legal treatment would ultimately benefit human well-being by helping the law better achieve its underlying policy goals.
Consider the American tax system. AI and people are engaging in the same sorts of commercially productive activities—but the businesses for which they work are taxed differently depending on who, or what, does the work. For instance, automation allows businesses to avoid employer wage taxes. So if a chatbot costs a company as much before taxes as an employee who does the same job (or even a bit more), it actually costs the company less to automate after taxes.
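To make that arithmetic concrete, here is a rough, hypothetical sketch. The salary, the software cost and the decision to count only the 7.65% employer-side payroll tax are illustrative assumptions, not figures from the article; real-world costs also include benefits, unemployment insurance, deductions and other factors.

```python
# Hypothetical illustration: why a pricier chatbot can still be cheaper after payroll taxes.
EMPLOYER_PAYROLL_TAX = 0.0765  # employer share of Social Security + Medicare (only tax modeled here)

def employer_cost(pre_tax_cost: float, is_human: bool) -> float:
    """Return the employer's total cost, adding payroll tax only for human labor."""
    return pre_tax_cost * (1 + EMPLOYER_PAYROLL_TAX) if is_human else pre_tax_cost

employee_salary = 50_000   # hypothetical worker
chatbot_license = 52_000   # hypothetical software that costs more before taxes

print(employer_cost(employee_salary, is_human=True))    # 53,825.0 -- worker costs more after taxes
print(employer_cost(chatbot_license, is_human=False))   # 52,000.0 -- automation wins despite higher sticker price
```

Even with these simplified numbers, the software option that looks more expensive before taxes ends up cheaper once the employer-side wage tax is counted.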
In addition to avoiding wage taxes, businesses can accelerate tax deductions for some AI when it has a physical component or falls under certain exceptions for software. In other words, employers can claim a large portion of the cost of some AI up front as a tax deduction. Finally, employers also receive a variety of indirect tax incentives to automate. In short, even though the tax laws were not designed to encourage automation, they favor AI over people because labor is taxed more than capital.
And AI does not pay taxes! Income and employment taxes are the largest sources of revenue for the government, together accounting for almost 90% of total federal tax revenue. Not only does AI not pay income taxes or generate employment taxes, it does not purchase goods and services, so it is not charged sales taxes, and it does not purchase or own property, so it does not pay property taxes. AI is simply not a taxpayer. If all work were to be automated tomorrow, most of the tax base would immediately disappear.
When businesses automate, the government loses revenue, potentially hundreds of billions of dollars in the aggregate. This may significantly constrain the government’s ability to pay for things like Social Security, national defense, and health care. If people eventually get comparable jobs, then the revenue loss is only temporary. But if job losses are permanent, the entire tax structure must change.
Debate about taxing robots took off in 2017 after the European Parliament rejected a proposal to consider a robot tax and Bill Gates subsequently endorsed the idea of a tax. The issue is even more critical today, as businesses turn to the use of robots as a result of pandemic-related risks to workers. Many businesses are asking: Why not replace people with machines?
Automation should not be discouraged on principle, but it is critical to craft tax-neutral policies to avoid subsidizing inefficient uses of technology and to ensure government revenue. Automating purely for tax savings may not make businesses any more productive or deliver any consumer benefits, and it may even lead businesses to accept lower productivity in exchange for a reduced tax burden. This is not socially beneficial.
The advantage of tax neutrality between people and AI is that it permits the marketplace to adjust without tax distortions. Businesses should then automate only if it will be more efficient or productive. Since the current tax system favors automation, a move toward a neutral tax system would increase the appeal of workers. Should the pessimistic prediction of a future with substantially increased unemployment due to automation prove correct, the revenue from neutral taxation could then be used to provide improved education and training for workers, and even to support social benefit programs such as basic income.
Once policymakers agree that they do not want to advantage AI over human workers, they could reduce taxes on people or reduce tax benefits given to AI. For instance, payroll taxes (which are charged to businesses on their workers’ salaries) should perhaps be eliminated, which would promote neutrality, reduce tax complexity, and end taxation of something of social value—human labor.
More ambitiously, AI legal neutrality may prompt a more fundamental change in how capital is taxed. Though new tax regimes could directly target AI, this would likely increase compliance costs and make the tax system more complex. It would also “tax innovation” in the sense that it might penalize business models that are legitimately more productive with less human labor. A better solution would be to increase capital gains taxes and corporate tax rates to reduce reliance on revenue sources such as income and payroll taxes. Even before AI entered the scene, some tax experts had argued for years that taxes on labor income were too high compared with other taxes. AI may provide the necessary impetus to finally address this issue.
Opponents of increased capital taxation largely base their arguments on concerns about international competition. Harvard economist Lawrence Summers, for instance, argues that “taxes on technology are likely to drive production offshore rather than create jobs at home.” These concerns are overstated, particularly with respect to countries like the United States. Investors are likely to continue investing in the United States even with relatively high taxes for a variety of reasons: access to consumer and financial markets, a predictable and transparent legal system, and a well-developed workforce, infrastructure, and technological environment.
A tax system informed by AI legal neutrality would not only improve commerce by eliminating inefficient subsidies for automation; it would help to ensure that the benefits of AI do not come at the expense of the most vulnerable, by leveling the playing field for human workers and ensuring adequate tax revenue. AI is likely to result in massive but poorly distributed financial gains, and this will both require and enable policymakers to rethink how they allocate resources and distribute wealth. They may realize we are not doing such a good job of that now.
Ryan Abbott is Professor of Law and Health Sciences at the University of Surrey School of Law and Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA.
Facebook has published a new guide which looks at emerging consumer behaviors as a result of COVID-19.
TikTok has announced a new notification system which will provide more insight into moderation decisions.