Ice Lounge Media

Here’s another edition of “Dear Sophie,” the advice column that answers immigration-related questions about working at technology companies.

“Your questions are vital to the spread of knowledge that allows people all over the world to rise above borders and pursue their dreams,” says Sophie Alcorn, a Silicon Valley immigration attorney. “Whether you’re in people ops, a founder or seeking a job in Silicon Valley, I would love to answer your questions in my next column.”

Extra Crunch members receive access to weekly “Dear Sophie” columns; use promo code ALCORN to purchase a one- or two-year subscription for 50% off.


Dear Sophie:

I’ve been reading about the new H-1B rules that came out this week, which raise wage levels and redefine which types of jobs qualify. What do we as employers need to do to comply? Are any other visa types affected?

— Racking my brain in Richmond! 🤯

Dear Racking:

As you mentioned, the Department of Labor (DOL) and the Department of Homeland Security (DHS) each issued a new interim rule this week that affects the H-1B program. However, the DOL rule impacts other visas and green cards as well. These interim rules, one of which took effect immediately after being published, are an abuse of power.

The president continues to fear-monger in an attempt to generate votes through racism, protectionism and xenophobia. The fatal irony here is that companies were in fact already making “real offers” to “real employees” for jobs in the innovation economy, which are not fungible and are actually the source of new job creation for Americans. A 2019 report by the Economic Policy Institute found that for every 100 professional, scientific and technical services jobs created in the private sector in the U.S., 418 additional, indirect jobs are created as a result. Nearly 575 additional jobs are created for every 100 information jobs, and 206 additional jobs are created for every 100 healthcare and social assistance jobs.
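The EPI multipliers quoted above are simple ratios of indirect to direct jobs. As a quick sketch, the ratios below come straight from the figures in the text; the helper function and the sample direct-job count are illustrative only:

```python
# Indirect-job multipliers from the 2019 EPI figures cited above:
# e.g. 418 additional jobs per 100 professional/scientific/technical jobs.
MULTIPLIERS = {
    "professional/scientific/technical": 418 / 100,
    "information": 575 / 100,
    "healthcare/social assistance": 206 / 100,
}

def indirect_jobs(direct: int, sector: str) -> int:
    """Estimate indirect jobs created for a given number of direct jobs."""
    return round(direct * MULTIPLIERS[sector])

print(indirect_jobs(100, "information"))  # → 575
```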

The DOL rule, which went into effect on October 8, 2020, significantly raises the wages employers must pay to the employees they sponsor for H-1B, H-1B1 and E-3 specialty occupation visas, H-2B visas for temporary non-agricultural workers, EB-2 advanced degree green cards, EB-2 exceptional ability green cards and EB-3 skilled worker green cards.

The new DHS rule, which further restricts H-1B visas, will go into effect on December 7, 2020. DHS will not apply the new rule to any pending or previously approved petitions. That means your company should renew your employees’ H-1B visas — if eligible — before that date.

The American Immigration Lawyers Association (AILA) has formed a task force to review the rules and help with litigation. Although both the DOL and DHS rules are likely to be challenged, they will probably remain in effect for some time before any litigation has an impact. AILA is actively seeking plaintiffs, including employees, employers and representatives of membership organizations who will be hurt by the new rules.

Read more

Welcome back to Human Capital, where we discuss the latest in labor, diversity and inclusion in tech.

This week’s eyebrow-raising moment came Wednesday when the U.S. Department of Labor essentially accused Microsoft of reverse racism (not a real thing) for committing to hire more Black people at its predominantly white company.

And that wasn’t even the most notable news item of the week. Instead, that award goes to Uber engineer Kurt Nelson and his decision to speak out against his employer and urge folks to vote no on the Uber-sponsored ballot measure in California that aims to keep drivers classified as independent contractors. I caught up with Nelson to hear more about what brought him to the point of speaking out. You can read what he had to say further down in this newsletter.

But first, I have some of my own news to share —  Human Capital is launching in newsletter form on Friday, October 23. Sign up here so you don’t miss out.

Now, to the tea.


Stay Woke


Coinbase loses about 5% of workforce for its stance on social issues

Remember how Coinbase provided an out to employees who no longer wanted to work at the cryptocurrency company as a result of its stance on social issues? Well, Coinbase CEO Brian Armstrong said this week that about 5% of employees (60 people) have decided to take the exit package, but that there will likely be more since “a handful of other conversations” are still happening.

Armstrong noted how some people worried his stance would push out people of color and other underrepresented minorities. But in his blog post, Armstrong said those folks “have not taken the exit package in numbers disproportionate to the overall population.”

Trump’s DOL goes after Microsoft for committing to hire more Black people

Microsoft disclosed this week that the U.S. Department of Labor’s Office of Federal Contract Compliance Programs (OFCCP) contacted the company regarding the racial justice and diversity commitments it made in June. Microsoft had committed to double the number of Black managers, senior individual contributors and senior leaders in its U.S. workforce by 2025. Now, however, the OFCCP says that commitment could be considered unlawful discrimination in violation of Title VII of the Civil Rights Act. That’s because, according to the letter, Microsoft’s commitment “appears to imply that employment action may be taken based on race.”

“We are clear that the law prohibits us from discriminating on the basis of race,” Microsoft wrote in a blog post. “We also have affirmative obligations as a company that serves the federal government to continue to increase the diversity of our workforce, and we take those obligations very seriously. We have decades of experience and know full well how to appropriately create opportunities for people without taking away opportunities from others. Furthermore, we know that we need to focus on creating more opportunity, including through specific programs designed to cast a wide net for talent for whom we can provide careers with Microsoft.”

This comes shortly after the Trump administration expanded its ban on diversity and anti-racism training to include federal contractors. While Microsoft’s commitment does not fall within the scope of that ban, it’s alarming to see the DOL going after a tech company for trying to increase diversity. It does seem, however, that the effects of the ban are making their way into the tech industry.

Joelle Emerson, founder and CEO of diversity training service Paradigm, says she lost her first client as a result of the executive order. While it’s not clear which client it was, many of Paradigm’s clients are tech companies.

Crunchbase report sheds light on VC funding to Black and Latinx founders

It’s widely understood that Black and Latinx founders receive far less funding than their white counterparts. Now, Crunchbase has shed some additional light on the situation. Here are some highlights from its 2020 Diversity Spotlight report.

Image Credits: Crunchbase

  • Since 2015, Black and Latinx founders have raised more than $15 billion, which represents just 2.4% of the total venture capital raised. 
  • In 2020, Black and Latinx founders have raised $2.3 billion, which represents 2.6% of all VC funding through August 31, 2020.
  • Since 2015, the top 10 leading VC firms in the U.S. have invested in around 70 startups founded by Black or Latinx people.
  • Andreessen Horowitz and Founders Fund are the two firms with the highest count of new investments in Black or Latinx-founded companies since 2015.

Gig Work


Uber engineer encourages people to vote no on Uber-backed Prop 22

Going against his employer, Uber engineer Kurt Nelson penned an op-ed on TechCrunch about why he’s voting against Prop 22. Prop 22 is a ballot measure in California that seeks to keep rideshare drivers and delivery workers classified as independent contractors. I caught up with Nelson after he published his op-ed to learn more about what brought him to the point of speaking out against Prop 22. 

“It was a combination of COVID affecting unemployment and health insurance for a bunch of people, getting close to the election and not having seen anyone who is really former Uber or Uber or former any gig companies saying anything,” Nelson told me. 

Plus, Nelson is on his way out from Uber — something that he’s been forthcoming about with his manager. He had already been feeling frustrated about the way Uber handled its rounds of layoffs this year, but the company’s push for Prop 22 was “the final nail in the coffin.”

Uber’s big arguments for why drivers should remain independent contractors are that it’s what drivers want and that it’d be costly to make them employees. Uber has said it also doesn’t see a way to offer drivers flexibility while also employing them.

“I think it’d be really challenging,” Uber Director of Policy, Cities and Transportation Shin-pei Tsay told me at TC Sessions: Mobility this week. “We would have to start to ensure that there’s coverage to ensure that there’s the necessary number of drivers to meet demand. That would be this forecasting that needs to happen. We would only be able to offer a certain number of jobs to meet that demand because people will be working in set amounts of time. I think there would be quite fewer work opportunities, especially the ones that people really have said that they like.”

But, as Nelson notes, Silicon Valley prides itself on tackling difficult problems. 

“We’re a tech company and we solve hard problems — that’s what we do,” he said.

In response to his op-ed, Nelson said some of his co-workers have reached out to him — some thanking him for saying something. Even prior to his op-ed, Nelson said he was one of the only people who would talk about Prop 22 in any negative way in Uber’s internal Slack channels. And it’s no wonder why, given the atmosphere Uber has created around Prop 22. 

During all-hands meetings, Nelson described how the executive team wears Yes on 22 shirts or has a Yes on 22 Zoom background. Uber has also offered employees free Yes on 22 car decals and shirts, Nelson said.

As for Nelson’s next job, he knows he doesn’t “want to touch the gig economy ever again,” he said. “I know that for a fact. I’m done with the gig economy.”


Union Life


Kickstarter settles with NLRB over firing of union organizer

Kickstarter agreed to pay $36,598.63 in backpay to Taylor Moore, a former Kickstarter employee who was fired last year, Vice reported. Moore was active in organizing the company’s union, which was officially recognized earlier this year. As part of the settlement with the National Labor Relations Board, Kickstarter also agreed to post a notice to employees about the settlement on its intranet and at its physical offices whenever they reopen.

In September 2019, Kickstarter fired two people who were actively organizing a union. About a year later, the Labor Board found merit to the claim that Kickstarter unlawfully fired a union organizer.

NLRB files complaint against Google contractor HCL America

It’s been about a year since 80 Google contractors voted to form a union with the United Steelworkers. But those contractors, who are officially employed by HCL America, have not been able to engage in collective bargaining, according to a new complaint from the National Labor Relations Board, obtained by Vice.

The complaint states HCL has failed to bargain with the union and has even transferred the work of members of the bargaining unit to non-union members based in Poland. The NLRB alleges HCL has done that “because employees formed, joined and assisted the Union and engaged in concerted activities, and to discourage employees from engaging in these activities.”


News bites


Read more

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a bunch of third-party partners, including AWS, all running bare metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or roughly 99% (five nines, 99.999%, is considered optimal).

Unbelievably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t continue with performance like that, especially as it was rapidly gaining in popularity. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While Roblox is still in the midst of the transition to a new modern tech stack today, we sat down with Williams to learn how he put the company on the road to a cloud-native, microservices-focused system with its own network of worldwide edge data centers.

Scoping the problem

Read more

The long-awaited tech antitrust report that the US Congress released on October 6 presents a remarkably flimsy case for action against the nation’s most innovative and competitive companies.

The report’s main recommendations would do very little to solve real social problems caused by technology, like misinformation and election interference, because these problems aren’t related to competition. And by narrowing its focus to the technology sector, the House Antitrust Subcommittee missed an opportunity to look at parts of the economy—hospitals, insurance providers, food producers—where consolidation and competition are genuine concerns.

In the 451-page report (pdf), more than a year in the making, legislators attempted to answer a seemingly straightforward question: Are Amazon, Apple, Facebook, and Google engaging in anticompetitive practices that government agencies aren’t able to punish under current laws? And if so, what changes should Congress make?

While the report describes a few genuine cases of unfair conduct by the platforms, many of the “problems” it identifies are merely complaints from companies that have been outcompeted. But harming competitors to benefit consumers (by lowering prices, for example) is the very nature of competition.

Most important, the report does not contradict these key facts about the US tech industry: prices are falling, productivity is rising, new competitors are flourishing, employment is outperforming other sectors, and most Americans really like these companies.

Disappointingly, the much-ballyhooed document is riddled with factual errors. For example, it claims that “a decade into the future, 30% of the world’s gross economic output may lie with [Amazon, Apple, Facebook, and Google] and just a handful of others.” But the source for that statistic, a study by McKinsey, actually said that by 2025 (not 2030), revenues from all digital commerce (not just by the Big Four and a few others) might reach 30% of global revenues.

To put in perspective how misleading the report’s original claim was, consider that the combined annual revenue last year of Amazon, Apple, Facebook, and Google represented only about half a percent of global economic output. Such a blatant error is conceivable only in a piece of work that first assumed its conclusion (“Big Tech is taking over the world”) and worked backward from there. There are dozens of other examples like this.

The good

Let’s start with what’s good about the report. It calls for increasing the budgets of the Federal Trade Commission (FTC) and the antitrust division of the Department of Justice, which is long overdue considering that their combined budgets have fallen by 18% (pdf), in real terms, since 2010. If regulators do not have the resources to properly enforce the laws on the books, it’s no wonder that some lawmakers will start calling for changes to those laws.

The report also recommends requiring the FTC to collect more data and report on the state of competition in various sectors. And it says the FTC should conduct retrospectives to study whether its past decisions to approve or block mergers were correct. These kinds of studies are also long overdue and would make enforcement officials better at their jobs.

The FTC is currently engaged in a special review of every acquisition by the Big Five tech companies (those listed above, plus Microsoft) over the last decade. That process should be extended to other sectors and repeated on a regular basis.

Lastly, the report’s proposals for how to increase data portability might work very well for simple forms of data (such as a user’s social graph), which are easier to standardize. If consumers can easily take their data along with them, it will be easier for them to switch to new platforms, giving startups more incentive to enter the market.

The bad

Unfortunately, the report’s primary recommendations would do far more harm than good. The signature proposal is to force dominant platforms to separate their business lines. Chairman David Cicilline, a Rhode Island Democrat, has called this a “Glass-Steagall for the internet,” referring to the 1933 US law (repealed in 1999) that divided commercial from investment banking.

In effect, this proposal would break up tech companies by separating the underlying platform from the products and services sold on it. Google could no longer own Android and offer apps like Gmail, Maps, and Chrome. Amazon could no longer own the Amazon Marketplace and sell its own private-label goods. Apple could no longer own iOS and offer products like Safari, Siri, or Find My iPhone. Facebook could no longer own social-media platforms and use personal data to target ads to users. The upshot is that these moves would destroy tech companies’ carefully constructed ecosystems and make their current business models unviable.

Of course, if this proposal is adopted, there will be many edge cases. Is the iPhone’s flashlight feature part of the operating system or is it more akin to an app? At this point, a flashlight feels like a standard feature of any phone. But not long ago, users had to download third-party apps to achieve that functionality.

As research from Wen Wen and Feng Zhu shows, when an operating system owner like Apple enters a product vertical (such as flashlight apps), third-party developers shift their efforts to other, more difficult-to-replicate app categories. So is adding a flashlight to the OS really anticompetitive behavior from a dominant platform, or is it pro-consumer innovation that leads to better allocation of developers’ time?

The consumer

To justify its proposals, the report would have needed to find a smoking gun (or two). It didn’t. In general, the leading tech companies produce enormous benefits for consumers.

Prices for digital ads have fallen by more than 40% over the last decade, and those savings flow through to consumers in the form of lower prices for goods and services. Prices for books have fallen by more than 40% since Amazon’s IPO in 1997. And Apple’s App Store takes the exact same cut (30%) as other platforms, including PlayStation, Xbox, and Nintendo. In fact, once you account for free apps, effective commission rates in the App Store are in the range of 4% to 7%.
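The “effective commission rate” claim is just a weighted average: the 30% headline rate applies only to commissionable (paid) transactions, while free activity contributes nothing. A minimal sketch of that arithmetic, with an invented paid/free split since the source doesn’t give the actual figures:

```python
def blended_commission(paid_share: float, headline_rate: float = 0.30) -> float:
    """Commission collected as a fraction of all platform activity,
    assuming the free share pays a 0% rate."""
    return paid_share * headline_rate

# Hypothetical: if only 20% of App Store activity were commissionable
# at 30%, the blended rate would be 6%, inside the 4-7% range cited above.
print(f"{blended_commission(0.20):.0%}")
```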

The report’s authors massage the statistics to make tech companies look like monopolies even though they’re not by conventional measures (defined as having greater than two-thirds market share, according to the Department of Justice). They’re all very large businesses, but generally accepted data shows they don’t meet that standard. Amazon has 38% of the e-commerce market. Fewer than half of new smartphones sold in the US are iPhones. In the digital ad market, Google has a 29% share, Facebook has 23%, and Amazon has 10%.

What’s more, consumers themselves say they benefit greatly from the products and services that these companies build. Research in the Proceedings of the National Academy of Sciences has shown that, on average, consumers would need to be paid $17,530 per year to give up search engines, $8,414 per year to give up email, and $3,648 per year to give up digital maps. Meanwhile, the price to access these services is typically zero.

The competition

One of the main themes of the report is that these platforms have become so powerful that no new companies dare to challenge them (and no venture capitalists dare to fund potential competitors). Several recent examples belie that notion.

Shopify, which is mentioned only in passing, is a $130 billion e-commerce company that powers more than one million online businesses. The company was founded in 2006, and the stock has risen roughly 1,000% over the last three years. Its most recent earnings report (pdf) showed that total gross merchandise volume on the platform is more than doubling year over year. (By contrast, Amazon’s GMV is growing by about 20% annually.)

To show Facebook’s dominance in the social-media market, the report includes an outdated chart (on page 93) comparing global monthly active users across the leading platforms. The chart puts TikTok at around 300 million monthly active users. But TikTok is a much more formidable competitor to Facebook than the report’s authors seem willing to admit: it recently announced that as of July, it had nearly 700 million monthly active users worldwide. On the same day the report was published, the investment bank Piper Sandler released a study showing that TikTok had surpassed Instagram as US teenagers’ second-favorite social-media app (behind Snapchat).

Zoom is another competitor that’s glossed over in the report. The subscription-based company faced an uphill battle against incumbents such as Google that offer videoconferencing for free (or bundle it with other productivity software). The report notes that in response to Zoom, Google tried to boost its own videoconferencing product, Meet, by introducing a new Meet widget inside Gmail and adding a prompt for Google Calendar users to “Add Google Meet video conferencing” to their appointments.

How have these moves affected Zoom? The company increased its number of daily meeting participants from 10 million in December 2019 to 300 million in April 2020, and its stock is now seven times higher than it was last year (reaching a market valuation of almost $140 billion).

Those aren’t just a few outliers. As Scott Kupor, a venture capitalist at Andreessen Horowitz, pointed out, startups have been booming over the last 15 years in the US. According to data (pdf) from PitchBook, the total annual number of VC deals increased from 3,390 to 12,211 between 2006 and 2019. Deal value increased from $29.4 billion to $135.8 billion. The number of deals at the earliest stage of investment—angel and seed rounds—rose by about a factor of 10 over the same time period (to 5,107 deals worth $10 billion in total value in 2019).

What’s next?

Granted, all the data presented here doesn’t rule out future antitrust cases against the tech companies. The Justice Department and some state attorneys general plan to launch an antitrust case against Google in the coming weeks. The FTC is likely to file suit against Facebook before the end of the year.

If those cases go to court, more sophisticated economic modeling based on non-public data might show that prices would have fallen even faster—or there would have been an even bigger startup boom—had the tech giants in question not been so dominant. But such an outcome would only prove that even if these companies really do harm competition, we don’t need major changes to our antitrust laws to hold them accountable.

To be sure, the scale and scope of tech platforms have created novel problems that our society needs to address, including issues related to privacy, misinformation, radicalization, counterfeit goods, child pornography, the decline of local news, and foreign interference in our elections. But instead of wasting taxpayer resources on a misguided crusade to break up our most innovative companies, Congress should consider passing measures like these:

  • Comprehensive federal privacy legislation that addresses the gaps in our current sector-based approach (and avoids the pitfalls of the EU’s General Data Protection Regulation and California’s Consumer Privacy Act).
  • Sunshine laws like the Honest Ads Act that help prevent foreign interference in future elections and make digital political ads more transparent.
  • Reform for the intellectual-property dispute process to reduce the prevalence of counterfeit goods online and prevent tech giants from copying genuinely innovative products.
  • Direct subsidies for the provision of local news, funded via broad-based taxes.

Unfortunately, changing our antitrust laws as the House Judiciary Committee recommends would fix none of the social issues caused by Big Tech. Each problem needs a targeted regulatory solution, not the big stick approach of “break them up.”

Alec Stapp is the director of technology policy at the Progressive Policy Institute, a center-left think tank based in Washington, DC.

Read more

In a national database in Argentina, tens of thousands of entries detail the names, birthdays, and national IDs of people suspected of crimes. The database, known as the Consulta Nacional de Rebeldías y Capturas (National Register of Fugitives and Arrests), or CONARC, began in 2009 as a part of an effort to improve law enforcement for serious crimes.

But there are several things off about CONARC. For one, it’s a plain-text spreadsheet file without password protection, which can be readily found via Google Search and downloaded by anyone. For another, many of the alleged crimes, like petty theft, are not that serious—while others aren’t specified at all.

Most alarming, however, is the age of the youngest alleged offender, identified only as M.G., who is cited for “crimes against persons (malicious)—serious injuries.” M.G. was apparently born on October 17, 2016, which means he’s a week shy of four years old.

Now a new investigation from Human Rights Watch has found that not only are children regularly added to CONARC, but the database also powers a live facial recognition system in Buenos Aires deployed by the city government. This makes the system likely the first known instance of its kind being used to hunt down kids suspected of criminal activity.

“It’s completely outrageous,” says Hye Jung Han, a children’s rights advocate at Human Rights Watch, who led the research.

Buenos Aires first began trialing live facial recognition on April 24, 2019. Implemented without any public consultation, the system sparked immediate resistance. In October, a national civil rights organization filed a lawsuit to challenge it. In response, the government drafted a new bill—now going through legislative processes—that would legalize facial recognition in public spaces.

The system was designed to link to CONARC from the beginning. While CONARC itself doesn’t contain any photos of its alleged offenders, it’s combined with photo IDs from the national registry. The software uses suspects’ headshots to scan for real-time matches via the city’s subway cameras. Once the system flags a person, it alerts the police to make an arrest.

The system has since led to numerous false arrests (links in Spanish), which the police have no established protocol for handling. One man who was mistakenly identified was detained for six days and was about to be transferred to a maximum-security prison before he finally cleared up his identity. Another was told he should expect to be repeatedly flagged in the future even though he’d proved he wasn’t the person the police were looking for. To help resolve the confusion, the police gave him a pass to show to the next officer who might stop him.

“There seems to be no mechanism to be able to correct mistakes in either the algorithm or the database,” Han says. “That is a signal to us that here’s a government that has procured a technology that it doesn’t understand very well in terms of all the technical and human rights implications.”

All this is already deeply concerning, but adding children to the equation makes matters that much worse. Though the government has publicly denied (link in Spanish) that CONARC includes minors, Human Rights Watch found at least 166 children listed in various versions of the database between May 2017 and May 2020. Unlike M.G., most of them are identified by full name, which is illegal. Under international human rights law, children accused of a crime must have their privacy protected throughout the proceedings.

Also unlike M.G., most were 16 or 17 at the time of entry—though, mysteriously, there have been a few one- to three-year-olds. The ages aren’t the only apparent errors in the children’s entries. There are blatant typos, conflicting details, and sometimes multiple national IDs listed for the same individual. Because kids also physically change faster than adults, their photo IDs are more at risk of being outdated.

On top of this, facial recognition systems, under even ideal laboratory conditions, are notoriously bad at handling children because they’re trained and tested primarily on adults. The Buenos Aires system is no different. According to official documents (link in Spanish), it was tested only on the adult faces of city government employees before procurement. Prior US government tests of the specific algorithm that it is believed to be using also suggest it performs worse by a factor of six on kids (ages 10 to 16) than adults (ages 24 to 40).

All these factors put kids at a heightened risk for being misidentified and falsely arrested. This could create an unwarranted criminal record, with potentially long-lasting repercussions for their education and employment opportunities. It might also have an impact on their behavior.

“The argument that facial recognition produces a chilling effect on the freedom of expression is more amplified for kids,” says Han. “You can just imagine a child [who has been falsely arrested] would be extremely self-censoring or careful about how they behave in public. And it’s still early to try and figure out the long-term psychological impacts—how it might shape their world view and mindset as well.”

While Buenos Aires is the first city Han has identified using live facial recognition to track kids, she worries that many other examples are hidden from view. In January, London announced that it would integrate live facial recognition into its policing operations. Within days, Moscow said it had rolled out a similar system across the city.

Though it’s not yet known whether these systems are actively trying to match children, kids are already being affected. In the 2020 documentary Coded Bias, a boy is falsely detained by the London police after live facial recognition mistakes him for someone else. It’s unclear whether the police were indeed looking for a minor or someone older.

Even those who are not detained are losing their right to privacy, says Han: “There’s all the kids who are passing in front of a facial-recognition-enabled camera just to access the subway system.”

It’s often easy to forget in debates about these systems that children need special consideration. But that’s not the only reason for concern, Han adds. “The fact that these kids would be under that kind of invasive surveillance—the full human rights and societal implications of this technology are still unknown.” Put another way: what’s bad for kids is ultimately bad for everyone.

Read more

In 2019, two multimedia artists, Francesca Panetta and Halsey Burgund, set out to pursue a provocative idea. Deepfake video and audio had been advancing in parallel but had yet to be integrated into a complete experience. Could they do it in a way that demonstrated the technology’s full potential while educating people about how it could be abused?

To bring the experiment to life, they chose an equally provocative subject: they would create an alternative history of the 1969 Apollo moon landing. Before the launch, US president Richard Nixon’s speechwriters had prepared two versions of his national address—one designated “In Event of Moon Disaster,” in case things didn’t go as planned. The real Nixon, fortunately, never had to deliver it. But a deepfake Nixon could.

So Panetta, the creative director at MIT’s Center for Advanced Virtuality, and Burgund, a fellow at the MIT Open Documentary Lab, partnered up with two AI companies. Canny AI would handle the deepfake video, and Respeecher would prepare the deepfake audio. With all the technical components in place, they just needed one last thing: an actor who would supply the performance.

“We needed to find somebody who was willing to do this, because it’s a little bit of a weird ask,” Burgund says. “Somebody who was more flexible in their thinking about what an actor is and does.”

While deepfakes have now been around for a number of years, deepfake casting and acting are relatively new. Early deepfake technologies weren’t very good and were used primarily in dark corners of the internet to swap celebrities into porn videos without their consent. But as deepfakes have grown increasingly realistic, more and more artists and filmmakers have begun using them in broadcast-quality productions and TV ads. This means hiring real actors for one aspect of the performance or another. Some jobs require an actor to provide “base” footage; others need a voice.

For actors, it opens up exciting creative and professional possibilities. But it also raises a host of ethical questions. “This is so new that there’s no real process or anything like that,” Burgund says. “I mean, we were just sort of making things up and flailing about.”

“Want to become Nixon?”

The first thing Panetta and Burgund did was ask both companies what kind of actor they needed to make the deepfakes work. “It was interesting not only what were the important criteria but also what weren’t,” Burgund says.

For the visuals, Canny AI specializes in video dialogue replacement, which uses an actor’s mouth movements to manipulate someone else’s mouth in existing footage. The actor, in other words, serves as a puppeteer, never to be seen in the final product. The person’s appearance, gender, age, and ethnicity don’t really matter.

But for the audio, Respeecher, which transmutes one voice into another, said it’d be easier to work with an actor who had a similar register and accent to Nixon’s. Armed with that knowledge, Panetta and Burgund began posting on various acting forums and emailing local acting groups. Their pitch: “Want to become Nixon?”

Actor Lewis D. Wheeler spent days in the studio training the deepfake algorithms to map his voice and face to Nixon’s.
PANETTA AND BURGUND

This is how Lewis D. Wheeler, a Boston-based white male actor, found himself holed up in a studio for days listening to and repeating snippets of Nixon’s audio. There were hundreds of snippets, each only a few seconds long, “some of which weren’t even complete words,” he says.

The snippets had been taken from various Nixon speeches, many of them from his resignation address. Given the grave nature of the moon disaster speech, Respeecher needed training materials that captured the same somber tone.

Wheeler’s job was to re-record each snippet in his own voice, matching the exact rhythm and intonation. These little bits were then fed into Respeecher’s algorithm to map his voice to Nixon’s. “It was pretty exhausting and pretty painstaking,” he says, “but really interesting, too, building it brick by brick.”

The final deepfake of Nixon giving the speech “In Event of Moon Disaster.”
PANETTA AND BURGUND

The visual part of the deepfake was much more straightforward. In the archival footage that would be manipulated, Nixon had delivered the real moon landing address squarely facing the camera. Wheeler needed only to deliver its alternate, start to finish, in the same way, for the production crew to capture his mouth movements at the right angle.

This is where, as an actor, he started to find things more familiar. Ultimately his performance would be the one part of him that would make it into the final deepfake. “That was the most challenging and most rewarding,” he says. “For that, I had to really get into the mindset of, okay, what is this speech about? How do you tell the American people that this tragedy has happened?”

“How do we feel?”

On the face of it, Zach Math, a film producer and director, was working on a similar project. He’d been hired by Mischief USA, a creative agency, to direct a pair of ads for a voting rights campaign. The ads would feature deepfaked versions of North Korean leader Kim Jong-un and Russian president Vladimir Putin. But he ended up in the middle of something very different from Panetta and Burgund’s experiment.

In consultation with a deepfake artist, John Lee, the team had chosen to go the face-swapping route with the open-source software DeepFaceLab. It meant the final ad would include the actors’ bodies, so they needed to cast believable body doubles.

The ad would also include the actors’ real voices, adding another casting consideration. The team wanted the deepfake leaders to speak in English, though with authentic North Korean and Russian accents. So the casting director went hunting for male actors who resembled each leader in build and facial structure, matched their ethnicity, and could do convincing voice impersonations.

The process of training DeepFaceLab to generate Kim Jong-un’s face.
MISCHIEF USA

For Putin, the casting process was relatively easy. There’s an abundance of available footage of Putin delivering various speeches, providing the algorithm with plenty of training data to deepfake his face making a range of expressions. Consequently, there was more flexibility in what the actor could look like, because the deepfake could do most of the work.

But for Kim, most of the videos available showed him wearing glasses, which obscured his face and caused the algorithm to break down. Narrowing the training footage to only the videos without glasses left far fewer training samples to learn from. The resulting deepfake still looked like Kim, but its facial movements were less natural. Face-swapped onto an actor, it muted the actor’s expressions.

To counteract that, the team began running all of the actors’ casting tapes through DeepFaceLab to see which one came out looking the most convincing. To their surprise, the winner looked least like Kim physically but had the most expressive performance.

The actor chosen to play Kim Jong-un had the least physical resemblance to the dictator but the most expressive performance.

To address the aspects of Kim’s appearance that the deepfake couldn’t replicate, the team relied on makeup, costumes, and post-production work. The actor was slimmer than Kim, for example, so they had him wear a fat suit.

When it came down to judging the quality of the deepfake, Math says, it was less about the visual details and more about the experience. “It was never ‘Does that ear look weird?’ I mean, there were those discussions,” he says. “But it was always like, ‘Sit back—how do we feel?’”

“They were effectively acting as a human shield”

In some ways, there’s little difference between deepfake acting and CGI acting, or perhaps voice acting for a cartoon. Your likeness doesn’t make it into the final production, but the result still has your signature and interpretation. But deepfake casting can also go the other direction, with a person’s face swapped into someone else’s performance.

Making this type of fake persuasive was the task of Ryan Laney, a visual effects artist who worked on the 2020 HBO documentary Welcome to Chechnya. The film follows activists who risk their lives to fight the persecution of LGBTQ individuals in the Russian republic. Many of them live in secrecy for fear of torture and execution.

In order to tell their stories, director David France promised to protect their identities, but he wanted to do so without losing their humanity. After testing out numerous solutions, his team finally landed on deepfakes. He partnered with Laney, who developed an algorithm that overlaid one face onto another while retaining the latter’s expressions.

Left: a photo grid of Maxim shot at many angles. Right: a photo grid of his deepfake cover shot at many angles.
Left: Maxim Lapunov, the lead character in the documentary who goes public halfway through the film. Right: a Latino LGBTQ activist who volunteered to be Maxim’s shield.
TEUS MEDIA

The casting process was thus a search not for performers but for 23 people who would be willing to lend their faces. France ultimately asked LGBTQ activists to volunteer as “covers.” “He came at it from not who is the best actor, but who are the people interested in the cause,” Laney says, “because they were effectively acting as a human shield.”

The team scouted the activists through events and Instagram posts, based on their appearance. Each cover face needed to look sufficiently different from the person being masked while also aligning in certain characteristics. Facial hair, jawlines, and nose length needed to roughly match, for example, and each pair had to be approximately the same age for the cover person’s face to look natural on the original subject’s body.

Left: Maxim’s unmasked face. Right: Maxim with his deepfake cover.
TEUS MEDIA

The team didn’t always match ethnicity or gender, however. The lead character, Maxim Lapunov, who is white, was shielded by a Latino activist, and a female character was shielded by an activist who is gender nonconforming.

Throughout the process, France and Laney made sure to get fully informed consent from all parties. “The subjects of the film actually got to look at the work before David released it,” Laney says. “Everybody got to sign off on their own cover to make sure they felt comfortable.”

“It just gets people thinking”

While professionalized deepfakes have pushed the boundaries of art and creativity, their existence also raises tricky ethical questions. There are currently no real guidelines on how to label deepfakes, for example, or where the line falls between satire and misinformation.

For now, artists and filmmakers rely on a personal sense of right and wrong. France and Laney, for example, added a disclaimer to the start of the documentary stating that some characters had been “digitally disguised” for their protection. They also added soft edges to the masked individuals to differentiate them. “We didn’t want to hide somebody without telling the audience,” Laney says.

Stephanie Lepp, an artist and producer who creates deepfakes for political commentary, similarly marks her videos upfront to make clear they are fake. In her series Deep Reckonings, which imagines powerful figures like Mark Zuckerberg apologizing for their actions, she also used voice actors rather than deepfake audio to further distinguish the project as satirical and not deceptive.

Other projects have been more coy, such as those of Barnaby Francis, an artist-activist who works under the pseudonym Bill Posters. Over the years, Francis has deepfaked politicians like Boris Johnson and celebrities like Kim Kardashian, all in the name of education and satire. Some of the videos, however, are only labeled externally—for example, in the caption when Francis posts them on Instagram. Pulled out of that context, they risk blurring art and reality, which has sometimes led him into dicey territory.


Today I’ve release a new series of #deepfake artworks with @futureadvocacy to raise awareness to the lack of regulation concerning misinformation online. These ‘partly political’ broadcasts see the UK Prime Minister Boris Johnson and Leader of the Opposition Jeremy Corbyn deep faked to send a warning to all governments regarding disinformation online. For this intervention, we’ve used the biometric data of famous UK politicians to challenge the fact that without greater controls and protections concerning personal data and powerful new technologies, misinformation poses a direct risk to everyone’s human rights including the rights of those in positions of power. It’s staggering that after 3 years, the recommendations from the DCMS Select Committee enquiry into fake news or the Information Commissioner’s Office enquiry into the Cambridge Analytica scandals have not been applied to change UK laws to protect our liberty and democracy. As a result, the conditions for computational forms of propaganda and misinformation campaigns to be amplified by social media platforms are still in effect today. We’re calling on all UK political parties to apply parliaments own findings and safeguard future elections. Despite endless warnings over the past few years, politicians have collectively failed to address the issue of disinformation online. Instead the response has been to defer to tech companies to do more. The responsibility for protecting our democracy lies in the corridors of Westminster not the boardrooms of Silicon Valley. See the full videos on my website! [LINK IN BIO] #deepfakes #newmediaart #ukelection #misinformation

A post shared by Bill Posters (@bill_posters_uk)

There are also few rules around whose images and speech can be manipulated—and few protections for actors behind the scenes. Thus far, most professionalized deepfakes have been based on famous people and made with clear, constructive goals, so they are legally protected in the US under satire laws. In the case of Mischief’s Putin and Kim deepfakes, however, the actors have remained anonymous for “personal security reasons,” the team said, because of the controversial nature of manipulating the images of dictators.

Knowing how amateur deepfakes have been used to abuse, manipulate, and harass women, some creators are also worried about the direction things could go. “There’s a lot of people getting onto the bandwagon who are not really ethically or morally bothered about who their clients are, where this may appear, and in what form,” Francis says.

Despite these tough questions, however, many artists and filmmakers firmly believe deepfakes should be here to stay. Used ethically, the technology expands the possibilities of art and critique, provocation and persuasion. “It just gets people thinking,” Francis says. “It’s the perfect art form for these kinds of absurdist, almost surrealist times that we’re experiencing.”

Read more

Want to develop a loyal YouTube following? Wondering how to better connect with an audience on YouTube? To explore how to grow and develop a loyal fan base on YouTube, I interview Cathrin Manning on the Social Media Marketing Podcast. Cathrin is a YouTube expert who teaches small YouTubers how to grow using the platform. […]

The post Growing on YouTube: How to Develop a Loyal Following appeared first on Social Media Examiner | Social Media Marketing.

Read more