Ice Lounge Media

An aspect of video calls that many of us take for granted is the way they can switch between feeds to highlight whoever’s speaking. Great — if speaking is how you communicate. Silent speech like sign language doesn’t trigger those algorithms, unfortunately, but this research from Google might change that.

It’s a real-time sign language detection engine that can tell when someone is signing (as opposed to just moving around) and when they’re done. Of course it’s trivial for humans to tell this sort of thing, but it’s harder for a video call system that’s used to just pushing pixels.

A new paper from Google researchers, presented (virtually, of course) at ECCV, shows how it can be done efficiently and with very little latency. It would defeat the point if the sign language detection worked but resulted in delayed or degraded video, so their goal was to make sure the model was both lightweight and reliable.

The system first runs the video through a model called PoseNet, which estimates the positions of the body and limbs in each frame. This simplified visual information (essentially a stick figure) is sent to a model trained on pose data from video of people using German Sign Language, and it compares the live image to what it thinks signing looks like.
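The pipeline described above can be sketched in miniature. Google's actual system feeds PoseNet landmark motion into a trained classifier; the version below is a simplified, hypothetical illustration that replaces the learned model with a motion-energy threshold over stick-figure keypoints (all function names and the threshold value are assumptions, not from the paper):

```python
# Hypothetical sketch of the signing-detection idea: reduce each frame to
# pose landmarks (as PoseNet does), measure how much the landmarks move
# between frames, and flag "signing" when motion is sustained and high.
# The real system uses a trained classifier, not a fixed threshold.

def motion_energy(prev, curr, shoulder_width):
    """Mean landmark displacement between two frames, normalized by
    shoulder width so the score is independent of how close the
    person is to the camera."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(prev, curr):
        total += ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return total / (len(prev) * shoulder_width)

def is_signing(frames, shoulder_width, threshold=0.05):
    """Return True when average inter-frame landmark motion over a short
    window exceeds the threshold. `frames` is a list of landmark lists,
    one (x, y) pair per body keypoint."""
    if len(frames) < 2:
        return False
    energies = [motion_energy(a, b, shoulder_width)
                for a, b in zip(frames, frames[1:])]
    return sum(energies) / len(energies) > threshold
```

A still person produces near-zero motion energy and is left alone; rapid hand and arm movement pushes the score over the threshold.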

Image showing automatic detection of a person signing.

Image Credits: Google

This simple process already produces 80 percent accuracy in predicting whether a person is signing or not, and with some additional optimizing gets up to 91.5 percent accuracy. Considering how the “active speaker” detection on most calls is only so-so at telling whether a person is talking or coughing, those numbers are pretty respectable.

In order to work without adding some new “a person is signing” signal to existing calls, the system pulls off a clever little trick. It uses a virtual audio source to generate a 20 kHz tone, which is outside the range of human hearing but is picked up by computer audio systems. This signal is generated whenever the person is signing, making the speech detection algorithms think that they are speaking out loud.
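The tone trick itself is simple to sketch. The snippet below generates 20 kHz sine samples and pushes them to a stand-in for a virtual audio source whenever the detector says the user is signing; the `sink` object and function names are hypothetical, and the real plumbing into a virtual microphone is platform-specific (the demo does this in the browser):

```python
import math

SAMPLE_RATE = 44100  # Hz; standard audio rate, comfortably above 2 * 20 kHz

def ultrasonic_tone(freq_hz=20000, duration_s=0.1, amplitude=0.1):
    """Generate a 20 kHz sine wave: inaudible to virtually everyone,
    but registered as audio energy by speech-detection code."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

def feed_when_signing(signing, sink):
    """Push tone samples to a (hypothetical) virtual audio source only
    while the sign-detection model reports that the user is signing,
    so the call's active-speaker logic switches to them."""
    if signing:
        sink.extend(ultrasonic_tone())
```

Because the tone sits above the range of human hearing, other participants hear nothing, yet the call's speaker-switching heuristics behave as if the signer were talking.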

Right now it’s just a demo, which you can try here, but there doesn’t seem to be any reason why it couldn’t be built right into existing video call systems or even as an app that piggybacks on them. You can read the full paper here.

Read more

A day after the Senate Commerce Committee moved forward with plans to subpoena the CEOs of Twitter, Facebook and Google, it looks like some of the most powerful leaders in tech will testify willingly.

Twitter announced late Friday that Jack Dorsey would appear virtually before the committee on October 28, just days before the U.S. election. While Twitter is the only company that’s openly agreed to the hearing so far, Politico reports that Sundar Pichai and Mark Zuckerberg also plan to appear.

Members of both parties on the committee planned to use the hearings to examine Section 230, the key legal shield that protects online platforms from liability from the content their users create.

As we’ve discussed previously, the political parties are approaching Section 230 from very different perspectives. Democrats see threatening changes to Section 230 as a way to force platforms to take more seriously toxic content like misinformation and harassment.

Many Republicans believe tech companies should be stripped of Section 230 protections because platforms have an anti-conservative bias — a claim that the facts don’t bear out.

Twitter had some choice words about that perspective, calling claims of political bias an “unsubstantiated allegation that we have refuted on many occasions to Congress,” and noting that those accusations have been “widely disproven” by researchers.

“We do not enforce our policies on the basis of political ideology,” the company added.

It sounds like the company and members of the Senate have very different agendas. Twitter indicated that it plans to use the hearing’s timing to steer the conversation toward the election. Politico also reports that the scope of the hearing will be broadened to include “data privacy and media consolidation” — not just Section 230.

A spokesperson tweeting on the company’s public policy account insisted that the hearing “must be constructive,” addressing how tech companies can protect the integrity of the vote.

“At this critical time, we’re committed to keeping our focus squarely on what matters the most to our company: joint efforts to protect our shared democratic conversation from harm — from both foreign and domestic threats,” a Twitter spokesperson wrote.

Regardless of the approach, dismantling Section 230 could prove potentially catastrophic for the way the internet as we know it works, so the stakes are high, both for tech companies and for regular internet users.

Read more

President Donald Trump’s positive COVID-19 result has made Twitter a busy place in the past 24 hours, including some tweets that have publicly wished — some subtly and others more directly — that he die from the disease caused by coronavirus.

Twitter put out a reminder to folks that it doesn’t allow tweets that wish or hope for death or serious bodily harm or fatal disease against anyone. Tweets that violate this policy will need to be removed, Twitter said Friday. However, it also clarified that this does not automatically mean suspension. Several news outlets misreported that users would be suspended automatically. Of course, that doesn’t mean users won’t be suspended.

Motherboard reported that users would be suspended, citing a statement from Twitter. That runs slightly counter to Twitter’s public statement on its own platform.

On Thursday evening, Trump tweeted that he and his wife, First Lady Melania Trump, had tested positive for COVID-19. White House physician Sean Conley issued a memo Friday confirming positive test results for SARS-CoV-2, the virus that causes COVID-19. Trump was seen boarding a helicopter Friday evening that was bound for Walter Reed Medical Center for several days of treatment.

The diagnosis sent shares tumbling Friday on the key exchanges, including Nasdaq. The news put downward pressure on all major American indices, but heaviest on tech shares.

Read more

Kindred Capital, the London-based VC that backs early-stage founders in Europe and Israel, recently closed its second seed fund at £81 million.

Out of its first fund raised in 2018, the firm has backed 29 companies. They include Five, which is building software for autonomous vehicles; Paddle, SaaS for software e-commerce; Pollen, a peer-to-peer marketplace for experiences and travel; and Farewill, which lets users create a will online.

However, what sets Kindred apart from most other seed VCs is its “equitable venture” model, which sees the founders it backs get carry in the fund, effectively becoming co-owners of Kindred. Once the VC’s LPs have their investment returned, the portfolio founders, along with the firm’s partners, share any subsequent fund profits.

To learn more about Kindred’s investment focus going forward and how its equitable venture model works in practice, I caught up with partners Leila Rastegar Zegna and Chrys Chrysanthou. We also discussed closing deals remotely and how the VC approaches diversity and inclusion.

TechCrunch: Kindred Capital backs seed-stage startups across Europe and in Israel. Can you elaborate a bit more on the fund’s remit, such as sector or specific technologies, and what you look for in founders and startups at such an early stage?

Rastegar Zegna: As a fund, we are very focused on the founder(s), so everything starts there. We try to drill down and get to know them as people and leaders, first and foremost. Do they have what it takes to get the company off the ground, the resilience to get through the inevitable ups and downs of startup life and through the scaling years to make this a massive outcome for the team and the investors?

The second element we spend time thinking about is the market itself and how big the company can grow within the constraints of that market. We also think deeply about the timing of the business, especially if they are trying to create a new market, such as in quantum computing, for example.

Chrysanthou: It’s also worth mentioning that many investors talk about product-market fit, but we are also great believers in founder-market fit. In other words, a founder who might be successful in one market might well fail in another, as different skills are required and even different personality types might be better suited. One way we assess this is to look for deep insights they have into the problem they’re trying to solve and how they think about their market.

After that, we are fairly sector-agnostic, which is why we have such a diverse portfolio, ranging from consumer products through to deep science.

How has the coronavirus pandemic and resulting lockdowns and social distancing affected the way you source and close deals?

Rastegar Zegna: Initially, we moved everything to video calls, like pretty much everyone else in the industry. Upon reflection, however, we realized that we were just using a new tool (e.g. Zoom) but in the old way — meaning, any meeting we used to have at Kindred HQ, we just transitioned onto Zoom. The interesting transition we’re going through now is to create a new way of working around the tool. That means for some meetings, Zoom will be the most effective medium of communication. For others it may be an audio call, and for a third category of discussion, a walking meeting in the park may be what’s called for. But the opportunity is to throw out the playbook written by inertia and generally accepted industry working norms, and create a first principles approach to the way in which we do business to optimize for the best outcome.

Read more

Twitter addresses questions of bias in its image-cropping algorithms, we take a look at Mario Kart Live and the stock market takes a hit after President Trump’s COVID-19 diagnosis. This is your Daily Crunch for October 2, 2020.

The big story: Twitter confronts image-cropping concerns

Last month, (white) PhD student Colin Madland highlighted potential algorithmic bias on Twitter and Zoom — in Twitter’s case, because its automatic image cropping seemed to consistently highlight Madland’s face over that of a Black colleague.

Today, Twitter said it has been looking into the issue: “While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm.”

Does that mean it will stop automatically cropping images? The company said it’s “exploring different options” and added, “We hope that giving people more choices for image cropping and previewing what they’ll look like in the tweet composer may help reduce the risk of harm.”

The tech giants

Nintendo’s new RC Mario Kart looks terrific — Mario Kart Live (with a real-world race car) makes for one hell of an impressive demo.

Tesla delivers 139,300 vehicles in Q3, beating expectations — Tesla’s numbers in the third quarter marked a 43% improvement from the same period last year.

Zynga completes its acquisition of hyper-casual game maker Rollic — CEO Frank Gibeau told me that this represents Zynga’s first move into the world of hyper-casual games.

Startups, funding and venture capital

Elon Musk says an update for SpaceX’s Starship spacecraft development program is coming in 3 weeks —  Starship is a next-generation, fully reusable spacecraft that the company is developing with the aim of replacing all of its launch vehicles.

Paired picks up $1M funding and launches its relationship app for couples — Paired combines audio tips from experts with “fun daily questions and quizzes” that partners answer together.

With $2.7M in fresh funding, Sora hopes to bring virtual high school to the mainstream — Long before the coronavirus, Sora was toying with the idea of live, virtual high school.

Advice and analysis from Extra Crunch

Spain’s startup ecosystem: 9 investors on remote work, green shoots and 2020 trends — While main hubs Madrid and Barcelona bump heads politically, tech ecosystems in each city have been developing with local support.

Which neobanks will rise or fall? — Neobanks have led the $3.6 billion in venture capital funding for consumer fintech startups this year.

Asana’s strong direct listing lights alternative path to public market for SaaS startups — Despite rising cash burn and losses, Wall Street welcomed the productivity company.

Everything else

American stocks drop in wake of president’s COVID-19 diagnosis — The news is weighing heavily on all major American indices, but heaviest on tech shares.

Digital vote-by-mail applications in most states are inaccessible to people with disabilities — According to an audit by Deque, most states don’t actually have an accessible digital application.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Read more

The president of the United States, Donald Trump, tested positive for covid-19 and within 24 hours had received an experimental, cutting-edge antibody treatment not available to other Americans.

In a statement released Friday, the White House said Trump had received “a single 8-gram dose” of the biotech treatment, which belongs to a promising new class of antiviral drugs.

The president “remains fatigued but in good spirits” after getting the emergency infusion, according to White House doctor Sean Conley. “He’s being evaluated by a team of experts, and together we’ll be making recommendations to the president and first lady in regards to best next steps.”

Trump and his wife, who also tested positive, were certain to have access to the best medical care the country can provide, including experimental drugs not available to others.

The White House said the president had received an IV infusion of a cocktail of antibodies manufactured by Regeneron Pharmaceuticals, of Tarrytown, New York. That treatment is essentially a way to mimic a powerful immune response in order to ward off a serious case of covid-19.

Because he is overweight and 74 years old, the president is at higher than average risk for developing a serious case of the disease. And the chance of death for someone like him is not small—it’s at least 5%, about 100 times as great for him as for someone under 30.

Trump’s doctors immediately had to make some tough decisions in deciding what drugs to give him. For one thing, they had to assess medical evidence that’s been consistently clouded by the White House itself and treat a patient who has shown interest in hokum treatments like bleach, second-guessed medical authorities, and even given a bullhorn to a doctor who believes in witchcraft.

The president has “mild symptoms,” according to his chief of staff, Mark Meadows. It usually takes several days before more serious covid-19 symptoms manifest, if they do. As long as Trump isn’t in the hospital, he would be classified as a lower-risk “outpatient.”

For nearly any other American, that would mean being told to wait and see how the symptoms develop, because there aren’t any covid-19 drugs approved for outpatients. Two treatments, blood plasma and the antiviral drug remdesivir, did receive emergency approval, but only for people who are hospitalized.

But Trump isn’t just anyone. So expect his doctors to consider—and maybe get hold of—experimental drugs that have shown promise, even if they have received no approval yet. The same could go for Melania Trump and other members of the inner circle who tested positive.

Drug company analysts at Raymond James early Friday put out a note to clients rating what experimental treatments they thought Trump was “most likely” to get. At the front of their list: the antibody drug manufactured by Regeneron, which is still being studied.

The stock analysts were exactly right. By Friday afternoon, the White House issued a statement saying that the president had already received the treatment.

The antibodies Regeneron makes are similar to those developed by people who catch the virus and survive it. Given in a concentrated dose administered through an intravenous drip, the manufactured antibodies are designed to grab hold of the viral particles and neutralize them.

The expectation for such treatments is that if given early to patients like Trump, they might stop the disease from ever progressing to its most serious stage of pneumonia and death.

Just this week, Regeneron said a study of nearly 300 outpatients showed that the antibody treatment cut down on the amount of virus in patients’ bodies. There were also hints that those who got the drug were less likely to end up in a doctor’s office, making it one of the most exciting new candidate treatments. (A similar antibody is being made and tested by Eli Lilly.)

When contacted by MIT Technology Review early on Friday, Regeneron was not willing to say whether the White House had already asked for doses of its antibody, REGN-COV2. “As a matter of policy, we don’t identify individuals without their consent who have or have not submitted a request,” spokesperson Alexandra Bowie said in an email.

Although Regeneron’s drug is not approved, many companies run “compassionate use” programs that can allow people who are not in their trials to get the treatment in special cases, and that’s apparently exactly what Trump qualified for.

“There is limited product available for compassionate-use requests that are approved under certain exceptional circumstances on a case-by-case basis,” Bowie said. The US Food and Drug Administration would have had to rapidly sign off on Trump’s treatment request too.

Regeneron declined to explain the series of events that led to Trump’s treatment, but a presidential request would not have been easy to turn down. Trump also appears to have a warm relationship with the New York company’s billionaire CEO and cofounder, Leonard Schleifer, who back in March was among a select group of executives brought to the White House for a meeting about how biotech might solve the crisis with drugs or vaccines.

What’s certain is that any company whose drug Trump takes could get a massive boost of publicity, perhaps even from the presidential Twitter feed. Today’s events could also accelerate an emergency approval for Regeneron’s drug, which would make it available to more people.

Another drug doctors will have to consider for Trump is remdesivir, made by Gilead Sciences. It’s never been shown to benefit just-diagnosed patients, like Trump, and is approved only for those who are hospitalized. But in the case of a sitting president, doctors might have to judge the risks and benefits differently.

And don’t forget that Trump will have a say in his treatment. That’s a wild card, because he has a pattern of taking medical advice from partisan sources rather than medical ones. For instance, he announced in May that he was taking hydroxychloroquine, an antimalarial then touted by conservative personalities including Rudy Giuliani as a covid-19 cure-all.

His doctor, the military osteopath Sean Conley, later confirmed Trump had taken the pills, even though most health experts say the drug doesn’t prevent infection or cure it.

The same doctor, in a memo, assured the American people that Trump would beat all the medical odds and sail through his bout with the coronavirus. In a short statement, in which he confirmed the diagnosis of the president and the first lady, Conley said, “Rest assured, I expect the president to continue carrying out his duties without disruption while recovering.”

No one can say if that will actually happen. But the fast decision to give Trump the antibody made by Regeneron could be the best shot the president had.

Read more

In November of 2018, a new deep-learning tool went online in the emergency department of the Duke University Health System. Called Sepsis Watch, it was designed to help doctors spot early signs of one of the leading causes of hospital deaths globally.

Sepsis occurs when an infection triggers full-body inflammation and ultimately causes organs to shut down. It can be treated if diagnosed early enough, but that’s a notoriously hard task because its symptoms are easily mistaken for signs of something else.

Sepsis Watch promised to change that. The product of three and a half years of development (which included digitizing health records, analyzing 32 million data points, and designing a simple interface in the form of an iPad app), it scores patients on an hourly basis for their likelihood of developing the condition. It then flags those who are medium or high risk and those who already meet the criteria. Once a doctor confirms the diagnosis, the patients get immediate attention.

In the two years since the tool’s introduction, anecdotal evidence from Duke Health’s hospital managers and clinicians has suggested that Sepsis Watch really works. It has dramatically reduced sepsis-induced patient deaths and is now part of a federally registered clinical trial expected to share its results in 2021.

At first glance, this is an example of a major technical victory. Through careful development and testing, an AI model successfully augmented doctors’ ability to diagnose disease. But a new report from the Data & Society research institute says this is only half the story. The other half is the amount of skilled social labor that the clinicians leading the project needed to perform in order to integrate the tool into their daily workflows. This included not only designing new communication protocols and creating new training materials but also navigating workplace politics and power dynamics.

The case study is an honest reflection of what it really takes for AI tools to succeed in the real world. “It was really complex,” says coauthor Madeleine Clare Elish, a cultural anthropologist who examines the impact of AI.

Repairing innovation

Innovation is supposed to be disruptive. It shakes up old ways of doing things to achieve better outcomes. But rarely in conversations about technological disruption is there an acknowledgment that disruption is also a form of “breakage.” Existing protocols turn obsolete; social hierarchies get scrambled. Making the innovations work within existing systems requires what Elish and her coauthor Elizabeth Anne Watkins call “repair work.”

During the researchers’ two-year study of Sepsis Watch at Duke Health, they documented numerous examples of this disruption and repair. One major issue was the way the tool challenged the medical world’s deeply ingrained power dynamics between doctors and nurses.

In the early stages of tool design, it became clear that rapid response team (RRT) nurses would need to be the primary users. Though attending physicians are typically in charge of evaluating patients and making sepsis diagnoses, they don’t have time to continuously monitor another app on top of their existing duties in the emergency department. In contrast, the main responsibility of an RRT nurse is to continuously monitor patient well-being and provide extra assistance where needed. Checking the Sepsis Watch app fitted naturally into their workflow.

But here came the challenge. Once the app flagged a patient as high risk, a nurse would need to call the attending physician (known in medical speak as “ED attendings”). Not only did these nurses and attendings often have no prior relationship because they spent their days in entirely different sections of the hospital, but the protocol represented a complete reversal of the typical chain of command in any hospital. “Are you kidding me?” one nurse recalled thinking after learning how things would work. “We are going to call ED attendings?”

But this was indeed the best solution. So the project team went about repairing the “disruption” in various big and small ways. The head nurses hosted informal pizza parties to build excitement and trust about Sepsis Watch among their fellow nurses. They also developed communication tactics to smooth over their calls with the attendings. For example, they decided to make only one call per day to discuss multiple high-risk patients at once, timed for when the physicians were least busy.

On top of that, the project leads began regularly reporting the impact of Sepsis Watch to the clinical leadership. The project team discovered that not every hospital staffer believed sepsis-induced death was a problem at Duke Health. Doctors, especially, who didn’t have a bird’s-eye view of the hospital’s statistics, were far more occupied with the emergencies they were dealing with day to day, like broken bones and severe mental illness. As a result, some found Sepsis Watch a nuisance. But for the clinical leadership, sepsis was a huge priority, and the more they saw Sepsis Watch working, the more they helped grease the gears of the operation.

Changing norms

Elish identifies two main factors that ultimately helped Sepsis Watch succeed. First, the tool was adapted for a hyper-local, hyper-specific context: it was developed for the emergency department at Duke Health and nowhere else. “This really bespoke development was key to the success,” she says. This flies in the face of typical AI norms. 

Second, throughout the development process, the team regularly sought feedback from nurses, doctors, and other staff up and down the hospital hierarchy. This not only made the tool more user friendly but also cultivated a small group of committed staff members to help champion its success. It also made a difference that the project was led by Duke Health’s own clinicians, says Elish, rather than by technologists who had parachuted in from a software company. “If you don’t have an explainable algorithm,” she says, “you need to build trust in other ways.”

These lessons are very familiar to Marzyeh Ghassemi, an incoming assistant professor at MIT who studies machine-learning applications for health care. “All machine-learning systems that are ever intended to be evaluated on or used by humans must have socio-technical constraints at front of mind,” she says. Especially in clinical settings, which are run by human decision makers and involve caring for humans at their most vulnerable, “the constraints that people need to be aware of are really human and logistical constraints,” she adds.

Elish hopes her case study of Sepsis Watch convinces researchers to rethink how to approach medical AI research and AI development at large. So much of the work being done right now focuses on “what AI might be or could do in theory,” she says. “There’s too little information about what actually happens on the ground.” But for AI to live up to its promise, people need to think as much about social integration as technical development.

Her work also raises serious questions. “Responsible AI must require attention to local and specific context,” she says. “My reading and training teaches me you can’t just develop one thing in one place and then roll it out somewhere else.”

“So the challenge is actually to figure out how we keep that local specificity while trying to work at scale,” she adds. That’s the next frontier for AI research.

Read more

The square-faced, three-legged alien shoves and jostles to get at the enormous plant taking over its tiny planet. But each bite just makes the forbidden fruit grow bigger. Suddenly the plant’s weight flips the whole sphere upside down and all the little creatures drop into space.

Quick! Reach in and catch one!

Agence, a short interactive VR film from Toronto-based movie studio Transitional Forms, won’t be breaking any box office records. Falling somewhere in the no-man’s-land between movies and video games, it may struggle to find an audience at all. But as the first example of a film that uses reinforcement learning to control its animated characters, it could be a glimpse into the future of filmmaking.

“I am super passionate about artificial intelligence because I believe that AI and movies belong together,” says the film’s director, Pietro Gagliano.

Gagliano previously won the first-ever Emmy for a VR experience in 2015. Now he and producer David Oppenheim, who works at the National Film Board of Canada, are experimenting with a kind of storytelling they call dynamic film. “We see Agence as a sort of silent-era dynamic film,” says Oppenheim. “It’s a beginning, not a blockbuster.”

Agence debuted at the Venice International Film Festival last month and was released this week to watch/play via Steam, an online video-game platform. The basic plot revolves around a group of creatures and their appetite for a mysterious plant that appears on their planet. Can they control their desire, or will they destabilize the planet and get tipped to their doom? Survivors ascend to another world. After several ascensions, there is a secret ending, says Oppenheim.

Gagliano and Oppenheim want viewers to have the option of sitting back and watching a story unfold, with the AI characters left to their own devices, or getting involved and changing the action on the fly. There’s a broad spectrum of interactivity, says Gagliano: “A lot of interactive films have decision moments, when you can branch the narrative, but I wanted to create something that let you transform the story at any point.”

A certain degree of interactivity comes from choosing the type of AI that controls each character. You can make some use rule-based AI, which guides the character using simple heuristics—if this happens, then do that. Then you can make others become reinforcement-learning agents trained to seek rewards however they like, such as fighting for a bite of the fruit. Characters that follow rules stick closer to Gagliano’s direction; RL agents inject some chaos.
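The two controller types described above can be sketched side by side. This is a hypothetical illustration, not Agence's actual code: the rule-based agent applies a fixed "if this, then that" heuristic, while the stand-in for the reinforcement-learning agent picks whichever action has the highest learned value (here just a fixed table passed in, where a real agent's values would come from reward-driven training):

```python
class RuleBasedAgent:
    """Fixed heuristic, as the director intended: if hungry, head for
    the plant; otherwise stay put. Actions are -1, 0, or 1 (a direction)."""
    def act(self, hunger, plant_direction):
        if hunger > 0.5:
            return plant_direction
        return 0

class LearnedAgent:
    """Stand-in for a trained reinforcement-learning policy. A real agent
    would map observations to action values learned from reward; here the
    'learned' values are a fixed table, so behavior can ignore the
    director's heuristic entirely -- the source of the injected chaos."""
    def __init__(self, action_values):
        self.action_values = action_values  # e.g. {-1: 0.1, 0: -0.2, 1: 0.9}

    def act(self, hunger, plant_direction):
        # Reward-seeking: pick the action with the highest learned value.
        return max(self.action_values, key=self.action_values.get)
```

Because both classes expose the same `act` interface, a film engine could swap one controller for the other per character, which is essentially the choice Agence hands to the viewer.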

But you can also lean in. Using VR controls or a game pad, you can grab characters and move them around, plant more giant flowers, and help balance the planet. The characters carry on with their business around you, seeking their rewards as best they can.

The film got some interest in Venice, says Oppenheim: “A lot of people come looking for that mix of story and interactivity. Introducing AI into the mix was something that people responded really well to.”

Gagliano’s mother also likes it. When he showed it to her, she spent the whole time breaking up fights between the creatures. “She was like, ‘You behave! You go back here and you play nicely,’” he says. “That was a storyline I wasn’t expecting.”

But people expecting a game have had a cooler response. “Gamers treat it more as a puzzle,” says Oppenheim. And the short running time and lack of challenge have put off some online reviewers.

Still, the pair see Agence as a work in progress. They want to collaborate with other AI developers to give their characters different desires, which would lead to different stories. In the long run, they think, they could use AI to generate all parts of a film, from character behavior to dialogue to entire environments. It could create surprising, dreamlike experiences for all of us, says Oppenheim. 

Read more