Ice Lounge Media

WhatsApp, the popular instant messaging app owned by Facebook, is now delivering roughly 100 billion messages a day, Facebook chief executive Mark Zuckerberg said during the company's quarterly earnings call on Thursday.

For some perspective, users exchanged 100 billion messages on WhatsApp last New Year's Eve. That is the day when WhatsApp tops its engagement figures and, as many of you may remember, also the time when the service customarily suffered glitches in past years. (No outage last New Year's Eve, though!)

At this point, WhatsApp is essentially competing with itself. Facebook Messenger and WhatsApp together were used to exchange 60 billion messages a day as of early 2016. Apple chief executive Tim Cook said in May that iMessage and FaceTime were seeing record usage, but did not share specific figures. The last time Apple did share a figure, it was far behind WhatsApp's usage at the time (podcast). WeChat, which has also amassed over 1 billion users, trails in daily message volume, too.

In early 2014, WhatsApp was being used to exchange about 50 billion texts a day, its then chief executive Jan Koum revealed at an event.

At the time, WhatsApp had fewer than 500 million users. It now has more than 2 billion users, and at least in India, its largest market by users, its popularity surpasses that of every other smartphone app, including the big blue app.

“This year we’ve all relied on messaging more than ever to keep up with our loved ones and get business done,” tweeted Will Cathcart, head of WhatsApp.

Sadly, that's the only update the company shared on WhatsApp today. It remains a mystery when WhatsApp expects to resume its payments service in Brazil, and when it plans to launch payments in India, where it began testing the service in 2018. (It has already shared big plans around financial services in India, though.)

“We are proud that WhatsApp is able to deliver roughly 100B messages every day and we’re excited about the road ahead,” said Cathcart.

Read more

What does a hologram-obsessed entrepreneur do for a second act after setting up a virtual Ronald Reagan in the Reagan Memorial Library, or beaming Jimmy Kimmel all the way from Hollywood to the Country Music Awards in Nashville?

If that entrepreneur is David Nussbaum, the founder of PORTL Hologram, the next logical step is to build a machine that can bring the joy of hologram-based communication to the masses.

That's the goal, thanks to a new $3 million round that Nussbaum's company raised from famed venture investor Tim Draper, former Electronic Arts executive Doug Barry and longtime awards-show producer Joe Lewis.

Barry is not only backing the company, he’s also coming on board as its first chief operating officer.

Much of this interest can be traced back to the hologram performance given posthumously by Tupac Shakur back at Coachella about eight years ago.

Nussbaum turned the excitement generated by that event into a business. He bought the patents that powered Tupac's beyond-the-grave performance and used the technology to beam Julian Assange out of the Ecuadoran embassy in London where he had been holed up for years, and to make dead stars live (and tour) again.

Those visual feats were basically just an updated version of the Pepper’s Ghost technique that stage illusionists and moviemakers have been using since it was invented by John Pepper in the 19th century.

The PORTL is a significant upgrade, according to Nussbaum.

The projector can transmit images any time of the day or night, and using PORTL’s capture studio-in-a-box means that anyone with $60,000 to spend and a white background can beam themselves into any portal anywhere in the world.

The company has sold a hundred devices and already delivered several dozen to shopping malls, airports and movie theater lobbies. “We’ve manufactured and delivered several dozen,” Nussbaum said.

Part of the selling point, beyond just the gimmick of the hologram’s next-level verisimilitude, is its interactivity. Through the studio rig and PORTL hardware, users can hear what people standing around the PORTL are saying and then respond.

For its next trick, PORTL is looking to build a miniaturized version of its system that would be about the size of a desktop computer and could be used to both record and distribute the holograms to anyone with a PORTL device.

“The minis will have all of the features to capture your content and rotoscope you out of our background and have the studio effects that is important in displaying your realistic volumetric like effect and they will beam you to any other device,” Nussbaum said.

To build out the business, the PORTL minis will offer more than just communications capabilities; they will carry recorded entertainment as well, Nussbaum said.

“The minis will be bundled with content like Peloton and Mirror bundled with very specific types of content. We are in conversations with a number of extremely well-known content creators where we would bundle a portal but will also have dedicated and exclusive content… [and] bundle that for $39 to $49 per month.”

It’s a vision that Nussbaum admits is far more expansive than his intentions — and the person he has to thank for the more ambitious vision of the business is none other than Draper.

“When I started this I thought it was going to be a novelty company,” he said. “When the pandemic hit he knew we needed to do much more than that.”

Read more

Tesla has made good on founder and CEO Elon Musk's promise to boost the price of its "Full Self-Driving" (FSD) software upgrade option, increasing it to $10,000 following the start of the staged rollout of a beta version of the software last week. That's a $2,000 jump over the package's price before today, and the cost has been rising steadily since last May.

The FSD option has been available as an add-on to complement Tesla's Autopilot driver assistance technology, even though the promised features themselves weren't available to Tesla owners before the launch of the beta this month. Even now it's only a limited beta, but this is the closest Musk and Tesla have come to actually launching something under the FSD moniker, after having teased a fully autonomous mode in production Teslas for years.

Despite its name, FSD isn't what most in the industry would define as full (Level 4 or Level 5) autonomy per the standards defined by SAE International and accepted by most of those working on self-driving. Musk has defined it as giving vehicles the ability "to be autonomous but requiring supervision and intervention at times," whereas Levels 4 and 5 (often considered "true self-driving") under SAE standards require no driver intervention.

Still, the technology does appear impressive in some ways, according to early user feedback, though having the general public test any kind of self-driving software unsupervised does seem an incredibly risky move. Musk has said that we should see a wide rollout of the FSD tech beyond the beta before year's end, so he definitely seems confident in its performance.

The price increase might be another sign of his and the company’s confidence. Musk has always maintained that users were getting a discount by handing money over early to Tesla in order to help it develop technology that would come later, so in many ways it makes sense that the price increase comes now. This also obviously helps Tesla boost margins, though it’s already riding high on earnings that beat both revenue and profit expectations from analysts.

Read more

Polestar, the electric vehicle brand that was spun out of Volvo Car Group, has issued another recall for its newest electric vehicle.

The company is voluntarily recalling nearly 4,600 vehicles over what has been described as faulty inverters, Reuters reported. Polestar said in a statement that all affected customers will be notified, beginning November 2.

“The recall involves the replacement of faulty inverters on most delivered customer vehicles,” Polestar said in its statement, adding that the inverters transform the stored energy in the battery into the power required by the electric motors. 

The required hardware fix can be completed in a single service visit, according to the company. Vehicles in North America were not affected by the recall, a spokesperson told TechCrunch. Vehicles in Switzerland were also not affected.

The company also said the vehicles require service for their High Voltage Coolant Heater (HVCH), which is responsible for both cabin and high-voltage battery heating. Faulty parts fitted to early production cars need to be replaced, the company said. The total number of affected vehicles delivered to customers is 3,150.

“As part of the actions required by the recall and service campaign, all vehicles will also be upgraded to be compatible with forthcoming Over-The-Air (OTA) updates,” the company said. “This will allow Polestar to push new software directly to Polestar 2 vehicles when OTA updates are available.”

Polestar, which in 2017 was recast as an electric performance brand aimed at producing exciting and fun-to-drive electric vehicles, started production this spring of its all-electric Polestar 2 vehicle at a plant in China. The production start was a milestone for the company that is jointly owned by Volvo Car Group and Zhejiang Geely Holding of China.

However, the company has faced some early headwinds. Polestar issued its previous recall on October 2, after several cars had abruptly stopped while driving. "This happened in very, very rare cases," Polestar CEO Thomas Ingenlath said during an interview at TC Sessions: Mobility 2020, which was held in October. Ingenlath said at the time that none of the reported cases happened in the United States, nor were any of the affected vehicles involved in an accident. That issue was fixed with a software update.

Read more

American hospitals are being targeted in a wave of ransomware attacks as covid-19 infections in the US break records and push the country’s health infrastructure to the limit. As reports emerge of attacks that interrupted health care in at least six US hospitals, experts and government officials say they expect the impact to worsen—and warn that the attacks could potentially threaten patients’ lives.

“I think we’re at the beginning of this story,” said Mike Murray, CEO at the health-care security firm Scope Security. “These guys are moving very fast and very aggressively. These folks seem to be trying to collect as much money as possible very quickly. I think it will be tomorrow or over the weekend before the real scale of this is understood. Compromises are still ongoing.”

The Federal Bureau of Investigation, the Cybersecurity and Infrastructure Security Agency, and the Department of Health and Human Services published a dramatic warning on the night of Wednesday, October 28, about “imminent” ransomware threats to American hospitals. The agencies held a conference call with health-care security executives earlier that day to emphasize the need to prioritize this threat. Ransomware is a type of hack in which an attacker uses malware to hijack a victim’s system and demands payment before handing back control.

Hospitals including St. Lawrence Health System in New York, Sonoma Valley Hospital in California, and Sky Lakes Medical Center in Oregon have all said they’ve been hit by ransomware. A doctor told Reuters that one hospital had to function entirely on paper after its computers were taken offline.

Ransomware has grown into a multibillion-dollar international industry over the last decade and the pandemic has only increased profits. Is there any way to stop the threat?

One answer could be for the US government to carry out more offensive hacking operations against ransomware gangs, similar to one US Cyber Command conducted earlier this month. But today’s attacks prove that definitively disrupting the activity of these criminals is easier said than done.

The infamous ransomware gang behind these new attacks is known primarily as UNC1878 or Wizard Spider. The group, believed to be operating out of Eastern Europe, has been tracked for at least two years across hundreds of targets. 

“They’re incredibly prolific,” said Allan Liska, an intelligence analyst at the cybersecurity firm Recorded Future. “Their infrastructure is very good. You can see that because even with the takedowns Microsoft and Cyber Command have tried, they’re still able to operate. Honestly, they’re better funded and more skilled than many nation-state actors.”

The hacking tools UNC1878 uses include the notorious TrickBot trojan to gain access to victims’ systems, and the Ryuk ransomware to extort victims. Several of the tools in the group’s arsenal spare targeted machines if the systems are operating in Russian or, sometimes, other languages used in post-Soviet nations. 

The number of ransomware attacks against American hospitals rose 71% from September to October 2020, according to the cybersecurity firm Check Point. The rest of the world has seen smaller but still significant spikes in activity. Ryuk is responsible for 75% of ransomware attacks against American health-care organizations.

A patient died in September when ransomware hit a German hospital, but that attack appears to have targeted a hospital by mistake. By stark contrast, this week’s attacks are intentional.

Read more

With days still to go before the US presidential election, early voting has already topped half of all votes cast in the 2016 election, and every indication is that the electorate is energized. It makes sense, then, that in this heavily contested, highly polarized political environment (in the midst of a raging pandemic, no less), disinformation campaigns are likely to come hot and fast. 

Major platforms like Facebook and Twitter have started to take aggressive action against disinformation networks and accounts associated with the QAnon conspiracy theory. But blocking disinformation in an attempt to stop its spread can backfire, and it has recently left social media platforms open to accusations of censorship.

“The QAnon crackdown feels too late,” says Abby Ohlheiser, who writes about digital culture for MIT Technology Review and has been covering misinformation for years. “[It’s] as if the platforms were trying to stop a river from flooding by tossing out water in buckets.”

In this episode of Deep Tech, she joins Patrick Howell O'Neill, our cybersecurity reporter—who has also been writing our election integrity newsletter, the Outcome—and our editor-in-chief, Gideon Lichfield, to discuss how disinformation culture has been thriving online for years and what to expect on election day.

We also examine a proposal from Joan Donovan, the director of the Technology and Social Change Research Project at Harvard’s Kennedy School. She argues that it might be time to start regulating social media like another addictive and potentially harmful product: tobacco.

Check out more episodes of Deep Tech here.

Full episode transcript:

Gideon Lichfield: Misinformation that spreads on social media is lethal. It literally kills people. It’s fueled riots in Myanmar, killings of falsely suspected criminals in India, and the deaths of people who’ve taken fake cures or ignored safety precautions for covid-19.

It’s also threatening the integrity of next week’s US presidential election. For months, President Donald Trump and his supporters have been spreading false claims that mail-in voting is vulnerable to mass-scale fraud, even though both independent researchers and government bodies say it’s historically extremely rare.

Facebook, Twitter, YouTube and other platforms have been belatedly scrambling to respond. They’ve taken down or flagged election and covid misinformation and removed accounts associated with the QAnon conspiracy movement. But is it all too little, too late?

With me to discuss that are two of MIT Technology Review’s reporters: Abby Ohlheiser, who writes about digital culture and has been covering the misinformation beat for years, and Patrick Howell O’Neill, who covers cybersecurity and has been writing our election integrity newsletter, the Outcome.

And because misinformation is becoming an increasingly urgent problem in the political race, we’ll also look at a proposal for regulating Big Tech that borrows something from the playbook of regulating Big Tobacco.

I’m Gideon Lichfield, editor-in-chief of MIT Technology Review, and this is Deep Tech. 

Patrick, you've been writing our election newsletter, the Outcome. And obviously one of the big topics is the misinformation in the run-up to the election itself. Facebook and Twitter have been belatedly scrambling, it seems, to tamp down rumors about voting fraud. But there's a lot of worry that as the election itself happens, claims made by President Trump and others will incite violence and send people onto the streets. Do you have a sense that these platforms are ready for the kind of problems we're going to see on election night?

Patrick Howell O'Neill: So I think that you're right to say that it's a problem that we all see coming, but it's not clear how prepared we actually are. Obviously there's the president saying that he… or refusing to say, actually, that he will commit to a peaceful transfer of power, which injects a lot of pretty reasonable fear about what it means if he loses. So I think there's a lot to talk about there. First of all, there's the fact that he's been engaged in this long-term disinformation campaign, for lack of a better phrase, about the security of the election, and specifically about the security of mail-in voting, which could end up accounting for more than 50% of votes for this election.

And then there’s the fact that that kind of plays into this larger conspiratorial narrative, a meta narrative is what the experts are calling it, about the fact that this election will be rigged and that this election will be stolen. And the idea is that this can all come to a head on election day, depending on how things go. And that kind of meta narrative might end up being too big for Facebook and Twitter and social media in general to deal with, right, because it’s not so much any one post or piece of content that is subject to moderation or fact checking. It’s that everything plays into that and frankly, so far they’ve been failing on that front in terms of really getting a hold of that general narrative that the entire thing is rigged. 

Gideon Lichfield: Right so this is the problem of misinformation as we have it today, right? Is that it varies so much. It’s so amorphous. The platforms like Facebook and Twitter have been investing a lot in AI, in techniques to identify things that are false rumors, and then get them taken down before a human can see them. But it’s difficult to do that when the nature of the rumor is changing so much. Abby, this is something that you’ve written about, particularly as regards the kind of misinformation that is coming out of the QAnon movement. 

Abby Ohlheiser: Yeah. I mean, one of the things that has happened as QAnon has grown…which is due in a great deal to the misinformation environment after the first weeks of the pandemic…As Q Anon gained more and more attention and as these platforms started to do things to take down groups that were major circulators of misinformation the tactics simply shifted in response to those posts.

So while it is certainly true that QAnon content has been reduced on these platforms in response to some of these more aggressive policies, it has also become harder to tell what exactly counts as QAnon content or not. Q, the sort of mysterious figure at the center of QAnon, has even put up posts telling followers not to use QAnon hashtags anymore, because it makes the content too easy to find. So this is a defining and repeated characteristic of misinformation, abuse, and harassment on the internet. And it is one that the people who are really, really good at making this work, and making it work on these mainstream platforms, have had years of practice to get really good at in advance of this moment.

Gideon Lichfield: I was listening a few days ago to the Sway podcast on the New York Times, where Kara Swisher interviewed Alex Stamos. He used to be Facebook's chief information security officer. And he talked there about the fact that in the early days of using AI to combat misinformation or propaganda, Facebook was dealing with fairly specific kinds of problems. Like how to take down video of a shooting that was posted on Facebook Live before people saw it, or how to tamp down ISIS propaganda. And he made the point that when it's something like ISIS, they have a very clear ideology and a very specific set of messages and an identity. When it's white nationalism, they have fairly predictable, specific messages and memes. But when it's something like QAnon, or even something like election misinformation, the claims and the kinds of people making them can be so varied that it's really hard to train an algorithm to do that.

Abby Ohlheiser: One of the things that also makes it so challenging right now is that the goal is not just to spread the most explicit version of these ideas on Facebook and Twitter. It's to gain attention and a following as widely as possible. And so when QAnon tries to make a campaign reach millions of people, they're not going to necessarily do that by talking about the core beliefs that drive QAnon, which is a very specific conspiracy theory with very specific beliefs.

They're going to take some of the things that are part of what they believe and want others to believe. And they're going to attach it to more mainstream ideas and causes, and then try and seed their content into those more mainstream hashtags and conversations. So, yeah, I mean, it's partially that the ideologies and the content itself are so broad and changing, and it's also that if the goal is simply attention, then it allows you to be a little bit more flexible about what exactly you want to get attention for, as long as it kind of serves the broader goal.

Gideon Lichfield: Last week I interviewed the chief technology officers of both Facebook and Twitter at our EmTech conference and I asked them about what measures they have been taking against misinformation generally. Abby what’s your view broadly of what the platforms have been doing, particularly in the past year, to crack down? 

Abby Ohlheiser: I came across a Pew study conducted in early 2020, just before the pandemic. They asked people about QAnon, and at that point about 23% of American adults said they knew either a little bit or a lot about it. But when they surveyed people again in early September, that number had doubled, and Republicans who had heard about QAnon were much more likely to say that they felt it was at least a little bit good for the country. Right?

So it's not just that, like, the platforms' approach has changed. It's also that the awareness and power of QAnon as an entity has changed. In June, I wrote, based on interviews with experts, that I thought maybe it was already too late to stop QAnon. And I feel pretty good about writing that piece at this point. Since I've written that, the platforms have taken much more aggressive actions to try to take down QAnon accounts. And I think it's too soon to look at the extent to which that has or has not worked, because QAnon is still in the process of adapting to those changes. But it certainly seems like at this point they basically had a three-year head start to learn how to deal with the moderation practices and with the changes and the beliefs of these companies. And that gives them a lot of tools to figure out how to get around these things, or to simply move somewhere else and figure out a new way into these more mainstream conversations. I mean, also, at this point, the president has been repeatedly asked about QAnon and has repeatedly declined to condemn it in a way that would actually do anything to lessen its power.

You can't really report out the answer to whether President Trump's retweets of, at this point, hundreds of QAnon-associated Twitter accounts are something that he's doing intentionally or not. Indications are that it's not exactly intentional. He just kind of seems to like people who like him, which is a lot of what the QAnon movement is about. The fact that he is doing that already gives the ideology such a platform that the enforcement actions of these companies are coming at a point when they've already kind of reached their goal.

Gideon Lichfield: Patrick, the platforms use a variety of methods to tamp down misinformation. Some of them are less visible, like when Facebook decides to circulate a post that it thinks is problematic to fewer people. Some are more obvious, such as when the platforms flag something as misinformation or put some other context next to it to give an alternative source of information. We saw an example of that when Twitter blocked a link to an article in the New York Post about Hunter Biden, which was based on emails supposedly recovered from a laptop that Biden had supposedly left for repair. And there were a lot of questions about the provenance of this information, but blocking the story in that way kind of backfired on Twitter, didn't it?

Patrick Howell O'Neill: That's right. So, you know, the story was no longer the story, at least for the short term. The conversation turned a little bit away from Hunter Biden and the charges in the story, and it turned towards one of censorship and political bias. Frankly, it's a scenario that has been kind of expected and predicted for four years, since the WikiLeaks incidents in 2016. It's been repeated and prepared for in Silicon Valley and among academics. And the particular problem is that it's even more difficult than what happened with WikiLeaks, because it's going through American journalistic outfits.

So what is the correct course of action for a tech company, a social media company, to take in controlling that conversation? And Twitter's decision to block the link temporarily did turn the conversation, temporarily, towards this Republican talking point of Silicon Valley censorship and political bias and tipping the scale of the election unfairly away from Republicans.

Abby Ohlheiser: And one of the interesting things about these, like right wing claims about Silicon Valley censorship or the bias against conservative thought is that they’ve been around for years. They’ve never really been proven to be true, but they keep repeating in conversations like this because they work. 

And that certainly seems to be true with the New York Post Hunter Biden story. So I reached out to Zignal Labs, which tracks mentions of misinformation across online media. So social media, print media, all that stuff. And they looked at Twitter shares of the URL that Twitter blocked. Just before Twitter instituted that block, the link was being shared about 5,500 times in a 15-minute period. And immediately after the block went into effect, it jumped to about 10,000 shares every 15 minutes. So in that period of time, in which Twitter was doing something to try to reduce the spread of what it was limiting under its hacked materials policy, it actually caused shares of that link to essentially double. And as Patrick said, it then became this entire new cycle about Twitter going to war with conservatives, Twitter showing that it's truly pro-Biden, all that stuff. And then that conversation took off and kind of became its own thing, bringing this kind of dubiously sourced article with it.

Gideon Lichfield: You kind of almost feel, sorry, maybe just a tiny bit for the social media platforms, because it seems like they’re damned if they do and they’re damned if they don’t. If they censor, if they try to suppress any misinformation, they’re accused of bias. If they leave it up they’re obviously accused of being a platform for hate. Do you feel any sympathy for them? 

Abby Ohlheiser: You asking me that? [LAUGHTER] 

Gideon Lichfield: Yes. [LAUGHTER]

Abby Ohlheiser: The feature I wrote about how things got so bad was actually a story about listening and about memory, right? So I wanted to answer the question about how things are bad now, by looking back at all the voices that these companies could have listened to going back literally a decade to make some meaningful changes that would have possibly helped these platforms address this stuff. And at least de-incentivize this from being something that clearly works so well on a daily basis.

And so I would, I would be curious what some of the people I interviewed for those stories would feel about whether they would have sympathy for these companies, because I think sometimes these companies get away with the perspective that they didn't know how bad things were, or they're learning with the rest of us, or they're reacting to things as they find out about them. But if you talk to people who have been researching, or experiencing, harassment on these sites for years, you know, they'll tell you that they told the companies what was going on. So yeah, I have sympathy for any human who is in this space and trying to deal with it, you know, journalists, people who work at these companies, researchers. But also I don't want that sympathy to sort of occlude the fact that there were points of intervention, and there was knowledge that they had years and years and years ago that they chose not to act on. And that those choices, over and over again, are also a part of why we're here now.

Gideon Lichfield: When I asked Mike Schroepfer, the Facebook CTO, last week at EmTech why they hadn't acted sooner, because people had been warning about QAnon for years, as Abby wrote, he said something that I thought was kind of telling. He said that they had gotten a lot of data on the harm that this kind of misinformation could cause. And so I said, "Is that it? Do you need to accumulate overwhelming data on something before you will act on it?" And his reply to that was essentially that they wanted to be very careful about making a judgment without consulting what they called experts.

Mike Schroepfer: A mistake I don’t wanna make is assume that I understand what other people need, what other people want or what’s happening. And so you know a way to avoid that is to rely on expertise where we have it. So, you know, for example, for dangerous organizations, we have many people with backgrounds in counter terrorism. We have many people with law enforcement backgrounds when you talk about voting interference. We have experts with backgrounds in voting rights. And so you listen to experts and you look at data.

Gideon Lichfield: One might say the experts they should have been consulting were the researchers, and particularly the women and people of color, who were being affected by all this misinformation years ago and were saying that it was harmful, but weren't being listened to.

Abby Ohlheiser: Yeah. I mean, one of the things that kept coming up in my reporting… so for instance, I interviewed Ellen Pao, who's the former CEO of Reddit, and one of the things she told me when I asked her what these companies could have done better was: put leadership in place that looks more like, and has lived experiences more akin to, the people who actually use the site and who experience these issues firsthand, because not everybody needs data to know that racism exists on Reddit, for instance. And I think that you're right.

That comment is incredibly telling to me, because I think that is one of the fundamental differences between the people I interviewed for this piece, who talk about misinformation and harassment as things connected to much more systemic issues that were then brought online and incentivized to get worse, and people who feel like they need it proven to them that that is something that would happen.

Gideon Lichfield: So Patrick, what do you think we should be expecting on election day itself? 

Patrick Howell O'Neill: I think that when you talk about what to expect on election day itself, there are a couple of different layers to that answer. So let's start with the mechanics of the election itself, which so far have frankly been going pretty decently well. There's nothing to suggest that anything is wrong with voting or counting up to this point. We're 50-plus million votes in.

The thing to worry about is perception. So there's a question of, in an election with a potential majority mail-in vote, what's going to happen in terms of the results. Mail-in votes, first of all, take longer to count and typically start being counted later than in-person votes or early votes. And that's a function of several kinds of legal and other processes, but the end result is that we probably won't know all the mail-in votes on election night itself. Now, how that actually plays into us knowing the overall results of the election kind of depends on whether or not it's a landslide, whether it's close, what the swing states are.

A potentially bad scenario is that, you know, it's very close, there are key results not being reported yet. And in that vacuum of information, disinformation floods in, whether it's from an actual candidate or from a foreign adversary. It's that kind of vacuum of information that could potentially sow discord, spark chaos, and then get into the actual nightmare scenarios, which could range in any number of ways that could, I guess, go up to even violence. But even just discord, or illegitimacy for whoever ultimately gets elected, is a negative outcome here.

Gideon Lichfield: One of the leading scholars of misinformation is Joan Donovan, the director of the Technology and Social Change Research Project at Harvard's Kennedy School. Earlier this month, she testified about misinformation to Congress, and she wrote an essay for us arguing that it might be time to start regulating social media like… cigarettes. So Joan, in your piece you talk about Tim Kendall, the former head of monetization for Facebook. He gave testimony to Congress last month, and he drew this interesting parallel, where he said social media was like the tobacco industry in that tobacco firms added things like bronchodilators, sugar and menthol to cigarettes to make it easier for people to smoke more, and social media added things like status updates, photo tagging and likes, which encouraged people to use social media more. You made the point in your piece that there's also a parallel with tobacco if we want to understand the harms that social media can do. Can you tell us about that?

Joan Donovan: Yeah, I think that one of the things that has prevented us from really taking on the challenge that misinformation poses is a lack of a theoretical framework that moves beyond, “but it’s just my free speech” or “it’s just my choice to share and to create misinformation.”

“It’s just my choice,” was a way in which we initially understood the sale of cigarettes and the ways in which people were using smoking in public places. No one around you had a choice, whether they were going to breathe in secondhand smoke or not. And over time, epidemiologists and others started to think more clearly about, well, what are the health risks of smoking? And who else is being harmed by these individual choices that are causing what we would call negative externalities or causing undue harm to people who do not have the choice not to smoke.

And so over time you saw arguments and regulations be put in place and rolled out in different cities, around smoking in public, banning smoking on planes, banning smoking in movie theaters. And now it's very much universal that there is no smoking in public places. And I think it's important to think about the right framework for regulation of misinformation, because we're not talking about the whole of the information ecosystem. We're talking about the kinds of information that can potentially cause people to take on undue risk.

Gideon Lichfield: Like what, for example?

Joan Donovan: Particularly the subject of medical misinformation comes to mind. If you are barraged with messages that make claims that masks don’t work, that even the wearing of a mask increases your risk of getting coronavirus because you’re somehow breathing recycled air, these ideas cause people to change their behaviors very quickly.

Gideon Lichfield: In your piece, you also wrote about another example of misinformation doing real world damage which was around the wildfires on the West coast where there were rumors that Antifa activists had been lighting them. Can you tell us a bit more about what happened there? 

Joan Donovan: Yeah. So when we study different misinformation events, we really try to get a sense of how these rumors scaled—that is, how they started to be spread across the internet, who picked them up, who believed them, and then who was impacted by them. And the interesting thing about the rumor that antifascist protesters and Black Lives Matter were setting fires wasn't necessarily that people were going out and searching for Antifa in the woods.

It was really that folks had started to call law enforcement. They started to barrage local law enforcement with phone calls, which was making it hard for law enforcement to even do their work. And then there were instances where even the FBI had to tweet to say, hey, listen, stop spreading this rumor. So we have to think about when rumors that are political in nature have these unintended effects of preventing law enforcement from being able to do their jobs in an already very dangerous situation.

Gideon Lichfield: So what could the platforms have done in the case of the wildfires? 

Joan Donovan: My take on this is, if we're in a situation where journalists are called in to debunk stories, and academics and researchers are called in to look at and generate evidence of a misinformation campaign, and law enforcement and the FBI are saying this is a dangerous situation, platform companies have to do more proactively to not let these kinds of rumors have that kind of negative public impact. And there are a few different ways they can go about this, but we now have several years of data that point us to some of the most noxious offenders.

The blog that really pushed this idea into action and amplified this rumor has done this several other times. And so I think platform companies have a duty of care that they need to exercise.

Gideon Lichfield: So in the case of this blog that was spreading these rumors, should the platform companies have banned posts from it or made them less visible? What exactly should they have done?

Joan Donovan: Both and. Which is to say that if it's the case that you are looking at a stretch of misinformation and this blog or this set of accounts is the one that continues to be at the center of amplifying it, then you should take action by removing the accounts entirely. Or you can limit the spread of things that seem to be circulating, that are what is called in the business "over-performing." That is, there's a normal course in which pages on Facebook tend to operate, but when certain stories start to scale out of proportion and start to reach new audiences, that should trigger some kind of content review to ensure that it's not harassing information, it's not libelous information, it's not misinformation. And so I think that there are tools already available within platform companies that could be used to throttle the spread of misinformation in a more structured and transparent way.

Gideon Lichfield: Do you think it’s even possible for them to keep up? Because what is misinformation, what counts as misinformation is obviously totally dependent on the context. It keeps on changing. And so they’re always in some sense, a step behind in the world. 

Joan Donovan: Yeah. And I think that this is where we need to shift our focus from thinking about social media platforms as, you know, a telephone or a radio, and really start to treat them as what they are, which are broadcast networks. And we need to treat them in a way that requires them to have a public interest obligation, where they are required to, in some measure, produce information that is true and correct, and circulate that, as well as to act when they see these signals, which tend to be pretty strong, suggesting that something is going viral because it contains the seeds of a misinformation campaign.

Gideon Lichfield: On October 6th, Congress came out with its big report on antitrust and on the monopoly power that Amazon, Apple, Facebook, and Google each hold. Does that have any bearing, do you think, on this problem of misinformation?

Joan Donovan: I think when we go back and look at the last, you know, two decades at least of internet history, we have to understand what the promise of the internet was. If we think about the early internet, we weren't forecasting the death of local news; it was quite the opposite. The idea was that anybody could become a news broadcaster, that the promise was many websites and many ways of communicating. That was really where we were at the birth of America Online and a kind of internet that was much more decentralized. But platforms are built on top of the internet, and they have consolidated communication and information and turned it into a commodity. They've turned data into a commodity that is incredibly profitable. So the incentive to scale is driven by profit incentives.

And what that means for misinformation, though, is that for misinformation to be dealt with, you have to cut into your profits; you have to open up the hood a bit on the kinds of recommendation and search algorithms that are driving people to stay on these platforms. And so I think that as antitrust is pursued as it relates to misinformation, we are going to learn a lot more about how misinformation is profitable and how it keeps people on these platforms. Especially the kinds of misinformation that are conspiracy-related, or related to, you know, people really deeply engaging in these things we now call rabbit holes.

Gideon Lichfield: Some people think it’s not even possible for a platform like Facebook to really effectively fight misinformation. It depends so much on content being shared that if it were to really clamp down, it would undermine its own business model. 

Joan Donovan: You know, these are billion-dollar companies. We're not dealing with small markets at this stage. I think one of the things that we can look to minor apps or other kinds of platforms for is a duty of care to the communities that they serve. And the idea that a platform can be everything to everyone and also serve and protect its communities, I don't think is true any longer. And what's kind of interesting about this moment isn't necessarily that anonymous actors and folks on the margins are using these technologies in really nefarious ways. We're talking about the weaponization of social media by foreign governments, by marketers and grifters, by political operatives, by people who have some amount of resources and are turning the openness of these platforms into vulnerabilities in society to further either their own profit or their own political ends.

And so we have to understand that where the problem lies is not necessarily with the vision of what the technology is supposed to do, but with what happens when you give this technology to already powerful people and how they weaponize it and use it to essentially destroy the trust that has been built up with other industries, including and especially journalism.

Gideon Lichfield: Going back to the piece that you wrote, you drew the parallel between misinformation and secondhand smoke. In the case of tobacco, we dealt with that by imposing high taxes on it. You’re not suggesting we tax misinformation. So what do you think is the role for government in regulating social media companies? 

Joan Donovan: Yeah. I think that there needs to be new policy. Of course, everybody is calling for a revisioning of section 230, which has to do with the roles and responsibilities of platforms and other internet companies. 

It's the rule that says, essentially, that you should get rid of things that are noxious, like child pornography, but that the company is not going to be held responsible for things that people do on its software. Essentially, right? If someone sets up a gambling ring through email, you know, Gmail is not responsible for that. And I think that there are ways in which 230 facilitated the growth of these platforms in a way that did not anticipate where we have ended up—which is essentially where different kinds of elites have taken over and are using them in these dangerous ways.

When we have the policy conversation, we need to focus on the harms. And we haven't done so yet. Most of the time the policy conversation gets wrapped up in legalese around free speech and whose speech matters, but we need to shift and understand the true costs of misinformation. That is: who are the ones that have to repair the damage caused?

Gideon Lichfield: That’s it for this episode of Deep Tech. This is a podcast just for subscribers of MIT Technology Review, to bring alive the issues our journalists are thinking and writing about.

Deep Tech is written and produced by Anthony Green and edited by Jennifer Strong and Michael Reilly. I’m Gideon Lichfield. Thanks for listening.

Read more

"Welcome home welcome home oh oh oh the world is beautiful the world." They're not the most catchy lyrics. But after I've listened to "Beautiful the World" half a dozen times, the chorus is stuck in my head and my foot is tapping. Not bad for a melody generated by an AI trained on a data set of Eurovision songs and koala and kookaburra cries.

Back in May, “Beautiful the World” won the AI Song Contest, a competition run by Dutch broadcaster VPRO, in which 13 teams from around the world tried to produce a hit pop song with the help of artificial intelligence.

The winning entry was created by Uncanny Valley, a team of musicians and computer scientists from Australia that used both human songwriting and AI contributions. “Their music was exciting,” says Anna Huang, an AI researcher at Google Brain, who was one of the competition judges. “The hybrid effort really shined.”

Many believe that the near-term usefulness of AI will come via collaboration, with teams of humans and machines working together, each playing to their strengths. “AI can sometimes be an assistant, merely a tool,” says Carrie Cai, a colleague of Huang’s at Google Brain who studies human-computer interaction. “Or AI could be a collaborator, another composer in the room. AI could even level you up, give you superpowers. It could be like composing with Mozart.”

But for this to happen, AI tools will need to be easy to use and control. And the AI Song Contest proved a useful test of how to achieve that.

Huang, Cai, and their colleagues have looked at the various strategies different teams used to collaborate with the AIs. In many cases, the humans struggled to get the machines to do what they wanted and ended up inventing workarounds and hacks. The researchers identify several ways that AI tools could be improved to make collaboration easier.

A common problem was that large AI models are hard to interact with. They might produce a promising first draft for a song. But there was no way to give the model feedback for a second pass. The teams could not go in and tweak individual parts or instruct the AI to make the melody happier.

In the end most teams used smaller models that produced specific parts of a song, like the chords or melodies, and then stitched these together by hand. Uncanny Valley used an algorithm to match up lyrics and melodies that had been produced by different AIs, for example.
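To make that workflow concrete, here is a minimal, purely illustrative Python sketch of the stitching idea: two stand-in generator functions play the role of the smaller chord and melody models, and a toy compatibility score picks the pairing that fits together best. The function names and the scoring rule are assumptions for illustration only, not the actual algorithm Uncanny Valley or any other team used.

```python
import random

def generate_chords(n_bars):
    """Stand-in for a small chord-progression model: one chord per bar."""
    chords = ["C", "Dm", "Em", "F", "G", "Am"]
    return [random.choice(chords) for _ in range(n_bars)]

def generate_melody(n_bars):
    """Stand-in for a small melody model: one scale degree (1-7) per bar."""
    return [random.randint(1, 7) for _ in range(n_bars)]

def compatibility(chords, melody):
    """Toy score: count bars where the melody note lands on the chord root."""
    roots = {"C": 1, "Dm": 2, "Em": 3, "F": 4, "G": 5, "Am": 6}
    return sum(1 for c, m in zip(chords, melody) if roots[c] == m)

def stitch(n_bars=8, n_candidates=50):
    """Sample many candidate parts from each model and keep the best pairing."""
    best = None
    for _ in range(n_candidates):
        chords, melody = generate_chords(n_bars), generate_melody(n_bars)
        score = compatibility(chords, melody)
        if best is None or score > best[0]:
            best = (score, chords, melody)
    return best

print(stitch())
```

The point of the sketch is the shape of the process, not the music: because the individual models can't take feedback, the humans (or a matching heuristic like the one above) do the work of selecting and combining outputs after the fact.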

Another team, Dadabots x Portrait XO, did not want to repeat their chorus twice but couldn’t find a way to direct the AI to change the second version. In the end the team used seven models and cobbled together different results to get the variation they wanted.

It was like assembling a jigsaw puzzle, says Huang: “Some teams felt like the puzzle was unreasonably hard, but some found it exhilarating, because they had so many raw materials and colorful puzzle pieces to put together.”

Uncanny Valley used the AIs to provide the ingredients, including melodies produced by a model trained on koala, kookaburra, and Tasmanian devil noises. The people on the team then put these together.

“It’s like having a quirky human collaborator that isn’t that great at songwriting but very prolific,” says Sandra Uitdenbogerd, a computer scientist at RMIT University in Melbourne and a member of Uncanny Valley. “We choose the bits that we can work with.”

But this was more compromise than collaboration. “Honestly, I think humans could have done it equally well,” she says.

Generative AI models produce output at the level of single notes—or pixels, in the case of image generation. They don’t perceive the bigger picture. Humans, on the other hand, typically compose in terms of verse and chorus and how a song builds. “There’s a mismatch between what AI produces and how we think,” says Cai.

Cai wants to change how AI models are designed to make them easier to work with. “I think that could really increase the sense of control for users,” she says.

It’s not just musicians and artists who will benefit. Making AIs easier to use, by giving people more ways to interact with their output, will make them more trustworthy wherever they’re used, from policing to health care.

“We’ve seen that giving doctors the tools to steer AI can really make a difference in their willingness to use AI at all,” says Cai.

Read more

Early voting data shows that voter participation in the 2020 US presidential election is already at an all-time high in many states. With only days remaining before voting ends on November 3, more than 70 million Americans have cast ballots.

This unprecedented early turnout, and the complications presented by the covid-19 pandemic, have brought intense scrutiny to election administrators nationwide. Every hiccup and anomaly in how elections are run seems to give partisans at either end of the political spectrum a reason to accuse opponents of misdeeds.

But citing every error in election administration as evidence of malfeasance could undermine voter confidence. Even well-intentioned criticisms may make things seem worse than they actually are.

Yes, citizens should hold election administrators to very high standards, but it’s also true that human error and technology issues cause problems in every election. And this year, election administrators, poll workers, and vendors are dealing with the additional difficulties of a pandemic.

As Election Day approaches, Americans must take care to distinguish between relatively harmless election mishaps and cases of true malfeasance. For almost any technical glitch, the former is a far more likely explanation than the latter.

So if someone claims that election problems are evidence of a nefarious political plot, take a moment to consider other possible causes. Misprinted ballots, for example, can result from data errors and overwhelmed election officials’ failure to proofread carefully. Long lines at the polls might be caused by bandwidth issues with online check-in systems, rather than deliberate efforts to suppress voters.

To be sure, technology problems can adversely impact voters, and must be addressed whenever they arise. Election officials, voters, and the media should clearly present the facts when describing those issues.

But if voters and the media are not careful in evaluating claims about political meddling, they may unwittingly spread disinformation.

Voting tech

While new technology can introduce risks to any process, it can also improve elections if those risks are properly managed. My research into election operations at the nonprofit OSET Institute points to three key areas where the right combination of policy and technology could help voters—and where failing to use technology may actually hurt them.

Managing lines. Electronic poll books and online access to voter registration systems can streamline the check-in process and reduce waits at the polls. These systems are particularly useful during early voting or for same-day voter registration on Election Day (where policy allows it), because poll workers must be able to access the registration records of any potential voter who shows up—not just those registered to their specific location. But if these systems and networks aren’t properly tested ahead of time, they can malfunction and cause delays.

Tracking mail-in ballots. Ballot-tracking software and intelligent mail barcodes (IMb), which the US Postal Service uses to sort and track mail, can make mail-in voting more transparent and accountable. This technology can show voters their ballot’s whereabouts as it makes its way through the postal system. More than 45 states currently offer some version of this service. But not all states do, which leaves some voters in the dark. This may make those voters more vulnerable to disinformation about how their ballots are being handled—especially if they’re voting by mail for the first time.

Reporting results. Modern voting technology can scan hundreds of mail-in ballots per minute to record voters’ choices. It also allows election officials to digitally adjudicate any questionable voter marks without ever handling the physical ballots themselves. In this way, scanning technology can help us count votes faster.

Unfortunately, outdated policies in some states are slowing this process down. Most states allow election officials to start scanning mail-in ballots weeks before Election Day. But in other states (including Pennsylvania and Wisconsin), officials must wait until Election Day to begin opening them. Such policies create a bottleneck in the counting process and draw it out until well past Election Day, extending the window during which election disinformation could spread.

All three of these examples show how election technology can facilitate voting if used appropriately. In the end, though, technology will neither make nor break the election. Instead, a combination of policy, procedures, technology, and personnel shapes how the vote is recorded. It’s essential for voters to maintain perspective: technology and process errors are likely just errors, not evidence of political mischief. To tell the difference, voters should rely on trusted sources—namely, state and local election officials who are on the front lines of democracy.

Edward Perez is an expert in election technology and election administration. He is global director of technology development at the OSET Institute, a nonpartisan, nonprofit organization engaged in election infrastructure research and public technology development.

Read more

The US presidential election next Tuesday will shape the world for years, if not decades, to come. Not only because Joe Biden and Donald Trump have radically different ideas about immigration, health care, race, the economy, climate change, and the role of the state itself, but because they represent very different visions of the US’s future as a technology superpower.

As a nonprofit, MIT Technology Review cannot endorse a candidate. Our main message is that whoever wins, it will not be enough for him to fix the US’s abject failures in handling the pandemic and to take climate change seriously. He will also have to get the country back on a competitive footing with China, a rapidly rising tech superpower that now has the added advantage of not being crippled by covid-19. To do that, he’ll have to make up for years of government neglect—long predating the current president—of the kind of research that made the US the world’s technology center in the first place.

The Trump scorecard

The president’s record on science and technology speaks for itself. From the start of the pandemic, he has proudly discounted the recommendations of experts. He has turned the Centers for Disease Control, once one of the world’s most trusted public-health agencies, into a stumbling bureaucratic joke; pressured the Food and Drug Administration to give hasty approval to unproven, possibly dangerous treatments and vaccines; treated his own coronavirus task force as largely irrelevant; and sidelined Anthony Fauci, the nation’s top infectious-disease expert, whom he called a “disaster.” At a recent rally, he mocked Biden for promising to “listen to the scientists”; by contrast, 81 Nobel laureates signed a letter supporting Biden for precisely that reason. Science, Nature, the New England Journal of Medicine, and the Lancet, arguably the four most important scientific journals in the world, have all slammed Trump’s handling of covid.

The president’s attitude toward climate science is, of course, equally dismissive. He has pulled the US out of the Paris accord; suggested global warming is a blip (“It’ll start getting cooler. You just watch”); rolled back a slew of regulations on pollution, greenhouse-gas emissions, fossil-fuel extraction, toxic chemicals, and other environmental issues; and tried—unsuccessfully—to block states from setting stricter emissions targets than the federal government.

These policies reflect the administration’s broader disdain for science and technology as a whole. Each year, the Trump White House has proposed deep cuts to non-defense-related research funding at agencies like the National Science Foundation, the National Institutes of Health, the Environmental Protection Agency, and the Department of Energy. Each year Congress has granted increases instead. That may be harder this time, when legislators are also trying to keep a battered economy afloat. The House bills passed so far just barely keep research funding at last year’s levels.

There are small bright spots. This year’s budget proposal from the administration, though it cuts 6.5% from the NSF, nearly doubles the agency’s research spending on artificial intelligence and quantum information science, technologies that could be economically and militarily important. The proposal also boosts NASA’s funding by 12%. However, much of that is to support Vice President Mike Pence’s vision of getting astronauts back to the moon by 2024—a showy, nostalgic, but unrealistic goal, conveniently timed for when Pence might run for president. Less flashy but more scientifically valuable research programs at NASA will be cut.

Rising in the east, setting in the west

Even if Joe Biden wins and reverses these policies, he will have to contend with a weakening of the US’s technological primacy that began well before Trump. The country that birthed Silicon Valley has become complacent about maintaining the scientific and industrial base that made the Valley possible.

For decades, the US has been turning its back on the essential role of government in supporting science and technology. Government-funded R&D has dropped from more than 1.8% of GDP in the mid-1960s, when it was at its peak, to just over 0.6% now (chart 1). Private-sector funding has made up for the drop.

Chart 1
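To get a feel for the dollar scale of that decline, here is a back-of-the-envelope calculation. The GDP figure is an assumption (roughly the 2019 US figure, not given in the article); the two percentage shares come from the paragraph above.

```python
# Back-of-the-envelope scale of the decline in government-funded R&D.
# The GDP figure is an assumption (~2019 US GDP); the shares come from the article.

US_GDP = 21e12          # ~$21 trillion (assumption)
PEAK_SHARE = 0.018      # government-funded R&D as a share of GDP, mid-1960s peak
CURRENT_SHARE = 0.006   # government-funded R&D as a share of GDP, today

gap = (PEAK_SHARE - CURRENT_SHARE) * US_GDP
print(f"Returning to the mid-1960s share would mean roughly ${gap / 1e9:.0f}B more per year")
# prints: roughly $252B more per year
```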

The government’s share of funding for basic research—the precursor to the kinds of technologies companies can exploit—has been dropping too, from above 70% in the mid-20th century to 42% in 2017. Again, the private sector has filled the gap, but its priorities are different; much of the replacement money is in pharma. Governments are more likely to fund long-term, risky bets like clean energy, sustainable materials, or smart manufacturing—the kinds of technologies the world really needs right now.

Contrast this with the situation in China. There, government-funded R&D has gradually grown as a percentage of GDP (chart 2), even as the economy has exploded in size. The true measure of government investment is probably higher, since a lot of the private-sector R&D spending is by state-owned enterprises that to some extent take orders from the government.

Chart 2

And overall, China’s R&D spending is shooting up, approaching the level in the US (chart 3).

Chart 3

True, China is still far behind on many measures. Basic research, though it’s growing, still represents a much smaller share of GDP than in the US or other advanced economies (chart 4). Also, as we’ve written, although the number of scientific papers and patents published by Chinese researchers is ballooning, the quality of that work (as measured by things like the number of citations) is low, and homegrown Nobel laureates are few and far between.

Chart 4

Nonetheless, the gap is closing. Kai-fu Lee, a venture capitalist and former head of Google China, expressed an oft-heard view at a recent event held by the New York–based China Institute: the US, he said, is “further ahead in fundamental research in AI as well as almost any other domain,” but China is “catching up quickly” and has an edge in AI applications that require masses of data, such as machine translation and speech recognition. (Our China issue looked at several other areas in which the country is carving out an advantage.)

Much of China’s technological acceleration is linked to state-led plans such as “Made in China 2025,” which aims to make China more self-sufficient (pdf, page 21) in key high-tech industries like zero-emission vehicles, industrial robots, mobile-phone chips, and medical devices. This is in stark contrast to the US approach, where the main drivers of decisions about where the money goes have been venture capitalists and the increasingly deep-pocketed tech giants, all of them desperate to find the next product idea that can rapidly scale into a billion-dollar business.

Of course, one should take the claims made about schemes like Made in China 2025 with a pinch of salt. The shortcomings of centrally planned economies are well documented, and governments are usually not very good at innovation. The regulatory reforms in the mid-20th century that paved the way for the venture capital industry are arguably some of the most important technology policies the US ever adopted.

Still, it’s become increasingly clear in the West that while the venture capital model is good at building things people want, it’s less good at producing things society needs in order to solve hard, long-term problems like pandemics and climate change.

Recently, Western economists such as University College London’s Mariana Mazzucato have been lending credibility to the idea that governments should be more active in setting economic and technological priorities. In recent decades this kind of interventionism, known as industrial policy, has had a bad name; picking favorite sectors or companies to support tends to backfire. But Mazzucato calls for an approach that instead aims at a broad-based transformation, such as greening the economy. Other economists, like MIT’s Daron Acemoglu, argue that letting Silicon Valley set the agenda has not only limited innovation to the types of inventions that can make quick profits, but contributed to the growth of inequality.

The pandemic provides a telling illustration of America’s and China’s relative strengths. American companies—Moderna, Johnson & Johnson, Pfizer, and Novavax—are among the handful that currently have a covid-19 vaccine in phase 3 clinical trials. So are several Chinese firms—Sinovac, CanSino Biologics, and Fosun Pharma. But the US’s industrial base, depleted by decades of outsourcing, was pitifully incapable of mass-producing protective equipment, ventilators, and testing materials in the early days of the pandemic, while China’s ramped up in no time.

In other words, the old stereotype that the US invents things and China manufactures them is more out of date than ever. China is catching up to the US as an inventor and leaving it in the dust as a manufacturer. This is a good thing for the world as a whole; more competition means more sources of new ideas. But the US’s position in such a world is looking increasingly weak.

Facing the challenge

This summer, in response to both the US’s failures in the pandemic and the competition from China, a bipartisan group of legislators led by the Democratic senator Chuck Schumer and the Republican Todd Young introduced the Endless Frontier Act. It calls for investing $100 billion over five years to expand the NSF and to fund research in key fields, such as AI, quantum computing, biotech, advanced energy, and materials science. Though the bill was quickly forgotten as legislators bickered over fiscal stimulus and the Supreme Court nomination, it was a hopeful sign that politicians on both sides of the aisle are beginning to recognize the importance of science to reinvigorating the economy.

Biden has proposed spending even more—$300 billion over four years—on federal investments in R&D. His plan calls for major increases to various agencies, including the NSF and NIH, as well as “new breakthrough technology programs” in areas such as AI, 5G, and advanced materials. It also proposes a new Advanced Research Projects Agency for Health (ARPA-H) to further support medical research.

The Trump administration has generally been less specific on technology topics and less enthusiastic about broadly funding research. Though it has sought cuts to R&D, especially in clean energy, it has increased investment in five key “industries of the future”—AI, quantum computing, 5G, advanced manufacturing, and biotechnology—albeit not on the scale Biden is calling for. Much of its attention has gone to reducing what it argues are barriers to innovation, such as regulations and taxes.

Biden’s promises, of course, would be costly to keep (although they’re dwarfed by this year’s stimulus bills, and both candidates’ plans would likely add trillions of dollars to the national debt over the coming decade). And it’s far from clear whether he would be able to follow through on them, or what the results would be. But for comparison, Made in China 2025 was launched in 2015, and in that year alone, the Chinese government created about $220 billion worth (pdf, p. 17) of state-backed investment funds to support it.
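For a rough sense of scale, the figures cited above can be annualized, keeping in mind that the Chinese number is a single year's worth of newly created investment funds rather than an ongoing budget, so it is not a like-for-like comparison:

```python
# Annualized comparison of the public-investment figures cited above.
# Illustrative only: the Chinese figure is funds created in a single year (2015),
# not an ongoing annual R&D budget.

proposals = {
    "Endless Frontier Act": (100e9, 5),                    # $100B over 5 years
    "Biden R&D plan": (300e9, 4),                          # $300B over 4 years
    "Made in China 2025 funds (2015 alone)": (220e9, 1),   # ~$220B in one year
}

for name, (total, years) in proposals.items():
    print(f"{name}: about ${total / years / 1e9:.0f}B per year")
```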

Another clear difference between the candidates is their attitude to immigration. Biden plans to expand the number of visas for highly skilled foreign workers, like the H-1B. The tech industry relies heavily on these workers—there’s a shortage of skilled labor even in the midst of a recession—and research shows that issuing visas to them also has the knock-on effect of creating new jobs for US-born workers. The Trump administration, however, is restricting those visas, and also plans to impose caps on the length of student visas, making it harder for students to finish their degrees.

Do foreign workers and students enjoy the benefits of a stay in America only to then set up shop in their home countries? Of course. Do they sometimes steal US intellectual property? No question. But it’s not a one-way trade. As long as the US remains a desirable place for people to study and work, some proportion of them will stay, and contribute their skills and energies here instead of taking them back home.

Already, countries like Canada and France are taking advantage of the US’s tighter visa policy by making it easier for foreign tech workers to come to them instead. Meanwhile, China’s “Thousand Talents Plan” invests heavily in getting both Chinese-born and foreign scientists to do their research in China—and, it’s alleged, enables the theft of American intellectual property. But what’s the best way for the US to respond: cut domestic research funding and visas to push even more scientists into China’s arms, or create a flourishing and welcoming research environment to make them want to stay?

An area Biden’s plan doesn’t mention, but that urgently needs addressing, is patents. They are routinely given for ideas that are obvious and in widespread use—IBM got a patent for out-of-office email autoreplies in 2017—as well as for things that are physically impossible, like anti-gravity devices. As Zia Qureshi, a fellow at The Brookings Institution, wrote in 2018, “Lawsuits by patent trolls comprise more than three-fifths of all lawsuits for IP infringement in the U.S., and cost the economy an estimated $500 billion in 1990-2010.”

This is one of those issues where reform notionally enjoys bipartisan support but, in practice, has been watered down by special interests. The next president needs to advocate for common-sense measures ensuring that patents are actually granted only to truly novel ideas, for limited periods of time.

An endless frontier

The name of Schumer and Young’s Endless Frontier Act is a reference to a report by Vannevar Bush, who had coordinated American research during World War II. As the war’s end came into sight, President Franklin Roosevelt asked Bush for ideas about how to apply scientific knowledge “in the days of peace ahead” for “the improvement of the national health, the creation of new enterprises bringing new jobs, and the betterment of the national standard of living.”

The resulting report, titled “Science, The Endless Frontier,” outlined in great detail how federal investments in science could help. Although many of its recommendations were initially scuppered by political backbiting, it would become a lasting argument for the government’s role in funding science to address the country’s most critical challenges.

That was 75 years ago, and those were very different times. In the interim, the prevailing wisdom about the respective roles of government and the private sector has shifted. But the value of science in solving our problems—a theme that Bush constantly returned to—has not changed, and the need for government to support the creation of that new knowledge is once again clear. The last few months of the pandemic have taught this lesson, and the contest with China in the years to come will hammer it home. The only question is whether the US will learn it the hard way.

Read more

Are you using YouTube to grow your business? Wondering how to get your videos in front of more people? In this article, you’ll discover how to optimize your YouTube videos for more visibility in Google search results. Why Optimize YouTube Videos for Google and YouTube Search? YouTube is the world’s most popular video sharing site […]

The post How to Optimize Your YouTube Videos for Google Search Visibility appeared first on Social Media Examiner | Social Media Marketing.

Read more