Ice Lounge Media

The prospect of truly zero-contact delivery seems closer — and more important — than ever, with the pandemic changing how we think of last-mile logistics. Autonomous delivery executives from FedEx, Postmates, and Refraction AI joined us to talk about the emerging field at TechCrunch Mobility 2020.

FedEx VP of Advanced Technology and Innovation Rebecca Yeung explained why the logistics giant felt that it was time to double down on its experiments in the area of autonomy.

“COVID brought the term ‘contactless’ — before that not many people are talking about contactless. Now it’s almost a preferred way of us delivering,” she said. “So we see, from government to consumers, open mindedness about, maybe in the future you would have everything delivered to you through autonomous means, and that’s the preferred way.”

“If you looked up Postmates robots on Twitter or Instagram, people are always kind of questioning, what is this? What is it doing? Everything changed overnight with COVID, where people would see the robot and immediately understand, oh, this is for contactless delivery,” said Postmates VP of special projects Ali Kashani. “Everything suddenly made sense.”

He also explained how the seeming constraints of a robotic platform specific to food delivery made the engineering process, if not easier, at least naturally bounded by the data they’d collected.

“It’s kind of one of the advantages of being so close to the market, we can use data from our platform to drive certain decisions, because you don’t want to over-engineer you also don’t want to under-engineer,” Kashani said. “We actually developed simulations that would put robots in any location in the country on some date in the past. It would tell us, how many deliveries did this robot do? How many hours was it outside? How many miles did it travel? And it would use that information to decide exactly what kind of battery life do we need? Does it need to carry drinks? How many drink holders should it have to cover 99% of deliveries?”
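As a toy illustration of the replay idea Kashani describes — every record, field name, and number below is invented for the example, not Postmates code — a sizing decision like drink-holder count can be derived from historical order data like this:

```python
from dataclasses import dataclass

@dataclass
class Delivery:
    miles: float      # distance traveled for the order
    minutes: float    # time the robot would have spent outside
    drinks: int       # drinks in the order

def percentile(values, p):
    """Nearest-rank percentile of a list (p in [0, 1])."""
    s = sorted(values)
    return s[round(p * (len(s) - 1))]

def replay(deliveries):
    """Replay past orders as if a robot had served them, and derive
    rough design requirements from the totals."""
    return {
        "deliveries": len(deliveries),
        "hours_outside": sum(d.minutes for d in deliveries) / 60,
        "miles": sum(d.miles for d in deliveries),
        # size the drink holders to cover 99% of historical orders
        "drink_holders": percentile([d.drinks for d in deliveries], 0.99),
    }
```

The design choice is the same one Kashani names: rather than guessing capacity, take the 99th percentile of a quantity over real historical orders and build to that.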

Matthew Johnson-Roberson, co-founder and CTO of Refraction AI, noted that the pandemic has raised interest and demand, but also highlighted where things need to move forward in different ways.

“Obviously no one wants a global pandemic, but it has certainly energized this industry and put more attention on it,” he said. “Everybody is excited, oh, we’re going to have contactless delivery, it’s going to be great. But I think there are some real challenges that need to be addressed as an industry to get there. One of them is social acceptance, the other’s regulation. That’s starting to change because of COVID. I’m hopeful that this is an inflection point, and that we really do see more serious investment in this, but also widespread deployment, so it’s not a tech demo that you get to see once in one place, but it actually begins to take over some sizable bit of the market.”

Yeung also emphasized the need for the infrastructure that supports these autonomous platforms: “Thinking about the future, commercial launch, you need the dynamic routing, you need the dispatch system, you need the user interface, you need a tracking interface. We see great synergy for us to leverage for all sorts of autonomous applications.”

In discussing the danger of replacing human workers with robots, Yeung and Kashani were sanguine, suggesting, like others in the robotics industry, that there would be a shift in labor but that jobs would not be eliminated. Johnson-Roberson disagreed.

“I think we are going to be replacing jobs, and we need to face that head on,” he said. “I think it’s important that we reckon with that, that a lot of these decisions, they have a long history of not thinking through what the human consequences will be. So I’m an advocate for saying, look, we’re replacing jobs. Let’s think as a society: How do we address that? How do we deal with it? I think that we could live in a future with more just, fairer jobs with health insurance, more benefits. But I don’t think it is going to look how it looks today.”

Read more

Nvidia is in the process of acquiring chip designer Arm for $40 billion. Coincidentally, both companies are also holding their respective developer conferences this week. After he finished his keynote at the Arm DevSummit, I sat down with Arm CEO Simon Segars to talk about the acquisition and what it means for the company.

Segars noted that the two companies started talking in earnest around May 2020, though at first, only a small group of executives was involved. Nvidia, he said, was really the first suitor to make a real play for the company — with the exception of SoftBank, of course, which took Arm private back in 2016 — and combining the two companies, he believes, simply makes a lot of sense at this point in time.

“They’ve had a meteoric rise. They’ve been building up to that,” Segars said. “So it just made a lot of sense with where they are at, where we are at and thinking about the future of AI and how it’s going to go everywhere and how that necessitates much more sophisticated hardware — and a much more sophisticated software environment on which developers can build products. The combination of the two makes a lot of sense in this moment.”

The data center market, where Nvidia, too, is already a major player, is also an area where Arm has heavily focused in recent years. And while it goes up against the likes of Intel, Segars is optimistic. “We’re not in it to be a bit player,” he said. “Our goal is to get a material market share and I think the proof to the pudding is there.”

He also expects that in a few years, we’ll see Arm-powered servers available on all of the major clouds. Right now, AWS is ahead in this game with its custom-built Graviton processors. Microsoft and Google do not currently offer Arm-based servers.

“With each passing day, more and more of the software infrastructure that’s required for the cloud is getting ported over and optimized for Arm. So it becomes a more and more compelling proposition for sure,” he said, and cited both performance and energy efficiency as reasons for cloud providers to use Arm chips.

Another interesting aspect of the deal is that we may just see Arm sell some of Nvidia’s IP as well. That would be a big change — and a first — for Nvidia, but Segars believes it makes a lot of sense to do so.

“It may be that there is something in the portfolio of Nvidia that they currently sell as a chip that we may look at and go, ‘you know, what if we package that up as an IP product, without modifying it? There’s a market for that.’ Or it may be that there’s a thing in here where if we take that and combine it with something else that we were doing, we can make a better product or expand the market for the technology. I think it’s going to be more of the latter than it is the former because we design all our products to be delivered as IP.”

And while he acknowledged that Nvidia and Arm still face some regulatory hurdles, he believes the deal will be pro-competitive in the end — and that the regulators will see it the same way.

He does not believe, by the way, that the company will face any issues with Chinese companies not being able to license Arm’s designs because of export restrictions, something a lot of people were worried about when the deal was first announced.

“Export control of a product is all about where was it designed and who designed it,” he said. “And of course, just because your parent company changes, doesn’t change those fundamental properties of the underlying product. So we analyze all our products and look at how much U.S. content is in there, to what extent are our products subject to U.S. export control, U.K. export control, other export control regimes? It’s a full-time piece of work to make sure we stay on top of that.”

Here are some excerpts from our 30-minute conversation:

TechCrunch: Walk me through how that deal came about. What was the timeline for you?

Simon Segars: I think probably around May, June time was when it really kicked off. We started having some early discussions. And then, as these things progress, you suddenly kind of hit the ‘Okay, now let’s go.’ We signed a sort of first agreement to actually go into due diligence and then it really took off. It went from a few meetings, a bit of negotiation, to suddenly heads down and a broader set of people — but still a relatively small number of people involved, answering questions. We started doing due diligence documents, just the mountain of stuff that you go through and you end up with a document. [Segars shows a print-out of the contract, which is about the size of two phone books.]

You must have had suitors before this. What made you decide to go ahead with this deal this time around?

Well, to be honest, in Arm’s history, there’s been a lot of rumors about people wanting to acquire Arm, but really until SoftBank in 2016, nobody ever got serious. I can’t think of a case where somebody actually said, ‘come on, we want to try and negotiate a deal here.’ And so it’s been four years under SoftBank’s ownership and that’s been really good because we’ve been able to do what we said we were going to do around investing much more aggressively in the technology. We’ve had a relationship with Nvidia for a long time. [Rene Haas, Arm’s president of its Intellectual Property Group, who previously worked at Nvidia] has had a relationship with [Nvidia CEO Jensen Huang] for a long time. They’ve had a meteoric rise. They’ve been building up to that. So it just made a lot of sense with where they are at, where we are at and thinking about the future of AI and how it’s going to go everywhere and how that necessitates much more sophisticated hardware — and a much more sophisticated software environment on which developers can build products. The combination of the two makes a lot of sense in this moment.

How does it change the trajectory you were on before for Arm?

Read more

Google rebrands G Suite, Apple announces its next event date and John McAfee is arrested. This is your Daily Crunch for October 6, 2020.

The big story: G Suite becomes Google Workspace

To a large extent, Google Workspace is just a rebranding of G Suite, complete with a new set of (less distinctive) logos for Gmail, Calendar, Drive, Docs and Meet. But the company is also launching a number of new features.

For one thing, Google is (as previously announced) integrating Meet, Chat and Rooms across applications, with Gmail as the service where they really come together. Other features coming soon are the ability to collaborate on documents in Chats and a “smart chip” with contact details and suggested actions that appears when you @mention someone in a document.

Pricing remains largely the same, although there’s now an $18 per user per month Business Plus plan with additional security features and compliance tools.

The tech giants

Apple will announce the next iPhone on October 13 — Apple just sent out invites for its upcoming hardware event, all but confirming the arrival of the next iPhone.

Facebook’s Portal adds support for Netflix, Zoom and other features — The company will also introduce easier ways to launch Netflix and other video streaming apps via one-touch buttons on its new remote.

Instagram’s 10th birthday release introduces a Stories Map, custom icons and more — There’s even a selection of custom app icons for those who have recently been inspired to redesign their home screen.

Startups, funding and venture capital

SpaceX awarded contract to help develop US missile-tracking satellite network — The contract covers creation and delivery of “space vehicles” (actual satellites) that will form a constellation offering global coverage of advance missile warning and tracking.

Salesforce Ventures launches $100M Impact Fund to invest in cloud startups with social mission — Focus areas include education and reskilling, climate action, diversity, equity and inclusion, as well as providing tech for nonprofits and foundations.

Ÿnsect, the makers of the world’s most expensive bug farm, raises another $224 million — The team hopes to provide insect protein for things like fish food and fertilizer.

Advice and analysis from Extra Crunch

Inside Root’s IPO filing — As insurtech booms, Root looks to take advantage of a warm market and enthusiastic investors.

To fill funding gaps, VCs boost efforts to find India’s standout early-stage startups — Blume Ventures’ Karthik Reddy says, “There’s an artificial skew toward unicorns.”

A quick peek into Opendoor’s financial results — Opendoor’s 2020 results are not stellar.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

John McAfee arrested after DOJ indicts crypto millionaire for tax evasion — The cybersecurity entrepreneur and crypto personality’s wild ride could be coming to an end after he was arrested in Spain and now faces extradition to the U.S.

Trump is already breaking platform rules again with false claim that COVID-19 is ‘far less lethal’ than the flu — Facebook took down Trump’s post, while Twitter hid it behind a warning.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Read more

The news: Facebook announced on Tuesday that it will remove “any Facebook Pages, Groups and Instagram accounts representing QAnon, even if they contain no violent content.” QAnon, the pro-Trump conspiracy theory centering on the belief that the president of the United States is at war with a secret satanic pedophile ring run by liberals, has grown into an “omniconspiracy” in recent months. Accordingly, it has become a powerful distributor of conspiratorial thinking on a variety of topics—including misinformation about the pandemic and the presidential elections.  

The context: This goes further than the less intense ban announced in August. At the time, Facebook said it would remove pages, groups, and accounts containing “discussions of potential violence.” By then, QAnon had inspired a growing list of destructive, sometimes violent, acts. In 2019, the FBI concluded that QAnon was potentially capable of inspiring violence.  

Why now? QAnon flourished for years on social media before this summer, and many critics felt that Facebook’s partial ban was too little, too late. But it was likely prompted by the theory’s staggering growth on social media since March (an internal Facebook study this summer found that QAnon-associated groups had millions of members). Today’s announcement referred to QAnon’s involvement in spreading dangerous misinformation during the wildfires in the western United States as another reason for the more aggressive ban. 

Brian Friedberg, a senior researcher at the Harvard Shorenstein Center’s Technology and Social Change Project who has been tracking QAnon since its early days, said in a text message that while the announcement will likely fuel rumors among QAnon supporters that this ban amounts to “election interference” against Trump, the timing suggests that Facebook is trying to “AVOID further spread of election disinfo” from QAnon’s distribution networks by acting now. 

QAnon believers were expecting this: Although QAnon has a large presence on Facebook, its believers are present on most social-media platforms, and believers have been talking about a more intense Facebook and Twitter crackdown for a while. They had time to prepare, and at this point, they have some experience learning how to work around bans. For instance, the “Q” account at the center of the conspiracy theory recently instructed followers to “camouflage” themselves online and drop references to “Q” or “QAnon” in order to avoid bans targeting those keywords. The community’s immediate reaction to Facebook’s announcement, Friedberg said, was to use Twitter to promote alternative locations for QAnon believers to organize online. Gab, a social-media site that is popular with the far right, has already started to court QAnon believers and influencers.

Read more

Last week Texas governor Greg Abbott became the latest Republican to attack America’s vote-by-mail system. A proclamation he issued claimed that the threat of “illegal voting” justifies a dramatic decrease in the number of places that voters can hand-deliver their mail-in ballots. 

His announcement limited each county to just one drop-off point, forcing vast areas with millions of voters to use a single ballot location in the middle of a pandemic.

Abbott argues that reducing the number of places mail-in ballots can be dropped off will increase security. But Eddie Perez, a Texas-based election administration expert with the nonpartisan OSET Institute, says there are already stringent rules in place to protect ballot drop-offs from fraud.

“Abbott’s justifications don’t hold water,” he says. “The procedures for voters when dropping off ballots include many security requirements relating to preserving integrity.”

In fact, it already takes a lot to drop off your ballot in Texas.

“Examples include the need to present identification, the requirement that voters can only drop off a ballot for themselves, and the need to sign signature rosters,” he says. “Remember: all of these activities take place as ballots are delivered into the hands of election officials at officially designated election locations. None of this happens willy-nilly.”

Just one day before Abbott’s announcement, the county clerk in Texas’s Travis County—home to 1.2 million people and the state capital of Austin—shared pictures of groups of election officials staffing the locations in order to enforce rules and ensure integrity. 

Abbott’s move will disproportionately impact high-population counties that are Democratic strongholds in the state. Harris County—which includes Houston and has a population over 4 million—and Dallas County, population 2.6 million, both voted for Hillary Clinton in the 2016 election. 

Abbott’s plan “increases the likelihood that more ballots from eligible voters will not be turned in and counted,” Perez says. “Even in the best of circumstances that don’t include a public health crisis, Harris County is around 2,000 square miles. It is larger than some states in the union. Even without a pandemic, this makes it harder to vote.”

Why use drop-offs at all? 

Ballot drop boxes are designed to accept vote-by-mail ballots in a way that doesn’t pressure the postal service, itself under stress, to keep up.

They are a “critical part of a mail balloting regime,” says Charles Stewart III, the founder of the MIT Election Lab. “They rose because of voter demand. Voters prefer to return mail ballots in person to drop boxes. As drop boxes have become more common, voter confidence has increased in those states.”

There’s a broad consensus among academics and election officials that mail-in voting and ballot drop boxes are secure, that fraud is extremely rare, and that drop sites expand access to voting. But vote-by-mail has been hit by disinformation more than almost any other topic during this election.

At the first presidential debate, President Donald Trump called vote-by-mail a fraud “disaster.” That’s wrong. In fact, vote-by-mail has been expanded nationally over decades on a bipartisan basis, and there’s no evidence of widespread fraud. Between 2000 and 2012, billions of votes were cast, but the total number of vote-by-mail fraud cases prosecuted was 491.

“Disinformation about drop boxes leading to fewer of them in states would have the effect of limiting voter access to the ballot box,” says David Levine from the Alliance for Securing Democracy, a bipartisan security group. “Drop boxes are secure. They can be more convenient for voters. They enable a safer voting experience. If there was any evidence to support the assertion that drop boxes contribute to fraud, we wouldn’t see an increase in them across the country.”

Walking it back

It’s not the first time Governor Abbott has tried to push the idea of voter fraud. His last attempt didn’t go to plan, however.

Last year the Texas Republican Party launched a high-profile inquiry into illegal votes, echoing Trump’s rhetoric and questioning the citizenship of 95,000 voters. But the review quickly collapsed when it became apparent that many of those people had already been cleared as legitimate voters. The whole incident was quietly walked back.

So what next? Civil rights organizations have launched two court challenges to Abbott’s order. The state’s Democrats called it a “blatant voter suppression tactic.”

“The bottom line,” says Perez, “is when you completely divorce the way that the election is talked about from the actual sober facts of how elections are administered, you’re really creating dynamics where division and chaos can overwhelm the accurate and methodical counting of votes.”

Read more

The context: When Twitter announced it would start removing tweets expressing hope that President Trump would die of covid-19, a number of users — notably women and people of color in politics — openly asked why Twitter didn’t seem to be enforcing the same rules against abuse and threats in their own mentions.

On Monday, the Institute for Strategic Dialogue, a London-based think tank that researches extremism, released some timely data showing that some of the same politicians calling out Twitter’s inaction are indeed facing more attacks online than other politicians. 

The study: Researchers collected publicly tagged mentions on Twitter and Facebook for a handful of politicians for two weeks in June and July, scrutinizing them manually and using AI to identify abusive posts. 

Overall, researchers found that women and people of color were “far more likely than men to be abused on Twitter.” They found that women received an average of 12% more abuse on Facebook than male politicians. Between 5% and 10% of mentions of most male politicians were considered abusive, while mentions of female politicians on Twitter contained abuse between 15% and 39% of the time. 

Overall, women were targeted much more personally by the tweets in the study. While male politicians primarily faced abuse that used general terms, women—and in particular, Representatives Alexandria Ocasio-Cortez and Nancy Pelosi—were attacked with deeply personal and gendered language. 

The conclusion: The fact that these groups face much more harassment should be surprising to literally nobody who has been on Twitter. But the study focuses on such a small number of politicians that it is easy to read too much into the specifics. 

However, the report helps quantify one of the platform’s longest-running issues at a crucial moment, as Kamala Harris, the Democrats’ vice presidential candidate, prepares to debate Vice President Mike Pence, and as activists raise alarms about online voter suppression campaigns. 

The findings may not really be new, but they reinforce what women and people of color have been saying about Facebook, Twitter, and other major social-media companies for years: that while the sites may increasingly have policies banning abusive behavior, the enforcement of those policies often leaves its most common targets open to sustained, coordinated harassment. 

Read more

The news: The US Centers for Disease Control and Prevention has updated its guidelines to acknowledge that the coronavirus can be spread by tiny particles that linger in the air. The agency said it made the decision because of the mounting evidence that people with covid-19 can infect people even if they are more than six feet away, or shortly after the infected person left the area. These cases all occurred in poorly ventilated and enclosed spaces, and often involved activities that cause heavier breathing, like singing or exercise. However, “the CDC continues to believe, based on current science, that people are more likely to become infected the longer and closer they are to a person with COVID-19,” it said in a statement. The long-coming update could help to finally clarify the situation after the CDC published guidance acknowledging airborne transmission and then suddenly retracted it last month.

The significance: Evidence that airborne transmission is occurring has been mounting for months; 239 experts wrote an open letter to the World Health Organization in July calling for them to acknowledge it. The WHO still has not recognized airborne transmission as a significant factor in the pandemic, and the CDC’s slowness to do so has caused frustration among aerosol researchers, some of whom say it is the main route for infections. The CDC maintains it occurs only in “limited, uncommon” circumstances. Airborne transmission has become a topic of fierce contention, partly because it makes it far riskier to reopen spaces like restaurants, gyms, bars, schools, and offices.

What do we do now? The CDC advises that people stay at least six feet away from others, wear a mask that covers their nose and mouth, frequently wash their hands, clean high-touch surfaces often, and stay home when they are feeling sick. However, the implications of airborne transmission mean the CDC perhaps ought to shift its emphasis and go further, advising people to properly ventilate buildings, limit the number of people indoors at any given time while encouraging them to stay farther apart and masked, and try to socialize outdoors where possible. “The thing people need to understand is aerosol transmission is like everyone breathing out cigarette smoke, and you want to breathe in as little of others’ as possible. Everyone you are around, imagine they are breathing smoke, and try to avoid it,” Jose-Luis Jimenez, a chemistry professor at the University of Colorado, Boulder, who has studied aerosols for 20 years, told MIT Technology Review in an interview.

Pushback: Jimenez criticized this new CDC update for being written confusingly, using the phrase “small droplets” instead of the widely accepted word “aerosols.” Crucially, Jimenez said the document downplays the importance of airborne transmission. “We know that superspreading events are a major component of transmission. And every single superspreading event that has been studied appears to be dominated by aerosol transmission,” he said. Jimenez also said that the CDC update appears to suggest airborne transmission from sharing a room together is rare, when it is not. “For example, it is the most likely explanation for the outbreak at the White House,” he said.

Read next: This scientist made a Google Doc to educate the public about airborne coronavirus transmission

This story was updated after publishing to include comments from Jose-Luis Jimenez.

Read more

On a scorching day this August, Caleb Woodall wielded his shovel like a spear, stabbing it into the hardened crust of an asbestos-filled pit near Coalinga, California.

Woodall, a graduate student at Worcester Polytechnic Institute in Massachusetts, was digging out samples from an asbestos mine that’s been shuttered since 1980, a Superfund site on the highest peak in the state’s Diablo Range. He extracted pounds of the material from several locations across San Benito Mountain, shoveled them into Ziploc bags, and shipped them to a pair of labs for analysis.

He and his colleagues are trying to determine the makeup and structure of the materials pulled from the pits, and to answer two critical questions: How much carbon dioxide do they contain—and how much more could they store?

The vast surface area of certain types of fibrous asbestos, a class of carcinogenic compounds once heavily used in heat-resistant building materials, makes them particularly good at grabbing hold of the carbon dioxide molecules dissolved in rainwater or floating through the air.

That includes the most common form of asbestos, chrysotile, a serpentine mineral laced throughout the mountain (serpentine is California’s state rock). The reaction with carbon dioxide mainly produces magnesium carbonate minerals like magnesite, a stable material that could lock away the greenhouse gas for millennia.
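In idealized form (in the field the reaction actually proceeds via CO2 dissolved in water, and real serpentine is rarely pure), the carbonation of chrysotile can be written as:

```latex
\mathrm{Mg_3Si_2O_5(OH)_4} + 3\,\mathrm{CO_2} \longrightarrow 3\,\mathrm{MgCO_3} + 2\,\mathrm{SiO_2} + 2\,\mathrm{H_2O}
```

Each formula unit of the serpentine mineral binds three molecules of carbon dioxide as stable magnesite.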

Woodall and his advisor Jennifer Wilcox, a carbon removal researcher, are among a growing number of scientists exploring ways to accelerate these otherwise slow reactions in hopes of using mining waste to fight climate change. It’s a handy carbon-capturing trick that may also work with the calcium- and magnesium-rich by-products of nickel, copper, diamond, and platinum mining.

The initial hope is to offset the ample carbon emissions from mining itself using these minerals already extracted in the process. But the real hope is that this early work allows them to figure out how to effectively and affordably dig up minerals, potentially including asbestos, specifically for the purpose of drawing down vast amounts of greenhouse gas from the atmosphere.

“Decarbonizing mines in the next decade is just helping us to build confidence and know-how to actually mine for the purpose of negative emissions,” says Gregory Dipple, a professor at the University of British Columbia and one of the leading researchers in this emerging field.

Accelerating a very slow cycle

The UN’s climate panel found that any scenario that doesn’t warm the planet by more than 1.5 °C will require nearly eliminating emissions by midcentury, as well as removing 100 billion to 1 trillion metric tons of carbon dioxide from the air this century. Keeping warming below 2 °C could necessitate sucking out 10 billion tons a year by 2050 and 20 billion annually by 2100, a study by the National Academies found.

That’s such a giant amount that we’ll almost certainly need to use a variety of methods to get anywhere close, including planting trees and increasing carbon uptake in agricultural soils. The particular promise of using minerals to pull down carbon dioxide is that it can be done on a massive scale—and would effectively store it away forever.

Caleb Woodall deposits asbestos samples into a Ziploc bag for later analysis. (Roger Aines, Lawrence Livermore National Lab)

Mineralization is already the main mechanism nature uses in the so-called “slow carbon cycle.” The carbon dioxide in rainwater dissolves basic rocks, producing magnesium, calcium, and other compounds that make their way into the oceans. There, marine life converts the materials into shells and skeletons that eventually turn into limestone and other rock types.

There are more than enough minerals to tie up all the carbon dioxide we’ve ever emitted and more. The problem is that the vast majority are locked away in solid rock that doesn’t come into contact with the greenhouse gas. Even when they’re exposed in rock outcroppings, it takes a long time for these reactions to occur.

But a variety of interventions can transform the natural slow carbon cycle into a faster one. Those include physical processes like simply digging up the materials, grinding them down into finer particles, and spreading them in thin layers, all of which increases the reactive surface area exposed to carbon dioxide. There are also ways to speed up the chemical reactions by adding heat or compounds like acids.
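The payoff from grinding can be seen with back-of-the-envelope geometry. Treating particles as idealized spheres — real fragments are irregular, and the density here is just a rough illustrative figure for serpentine — surface area per unit mass scales inversely with particle diameter:

```python
def specific_surface_area(diameter_m, density_kg_m3=2550.0):
    """Surface area per kilogram of idealized spherical particles.

    For a sphere, surface/volume = 6/d, so area per unit mass
    is 6 / (d * rho). The default density is a rough value for
    serpentine, used only for illustration.
    """
    return 6.0 / (diameter_m * density_kg_m3)

# Grinding 1 cm rock down to 10-micron powder multiplies the
# reactive surface exposed to CO2 by a factor of ~1,000.
coarse = specific_surface_area(1e-2)   # ~0.24 m^2 per kg
fine = specific_surface_area(10e-6)    # ~235 m^2 per kg
```

This scaling is one reason mine tailings, already finely milled by ore processing, are such an attractive feedstock for carbon mineralization.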

“This is the giant, untapped opportunity that could remove enormous amounts of CO2,” says Roger Aines, head of the Carbon Initiative at Lawrence Livermore National Lab, who accompanied Woodall on the California field trip.

The right recipe

Dipple is exploring a variety of ways to do this.

In a pilot project last year, funded by the diamond company De Beers and Natural Resources Canada, he and colleagues used tailings from a mine in Canada’s Northwest Territories to ensnare carbon dioxide released from a tank. The point was to evaluate the possibility of using minerals to capture and store the gas from the flue stream of a power plant.

The team is now conducting a field trial for a proposed nickel plant in British Columbia. They’ve placed tailings from exploratory drilling into assorted containers, and are measuring the reaction rates that result from using different chemical additives and processes under different weather conditions. But they expect that simply adding water and effectively tilling the materials will rapidly remove carbon dioxide from the air, forming a solid block that can be buried.

Because the proposed operation would run primarily on hydroelectric power, they estimate that putting to use just 30% of the most reactive tailings from the mines would make the operation carbon neutral. Using about 50% would make it carbon negative.

But not all mine tailings are created equal. In a separate project, Wilcox and Woodall are conducting fieldwork at a platinum, palladium, and nickel mine in Montana, in hopes of developing ways to accelerate carbon-capturing reactions with less-than-ideal by-products. The main minerals in the tailings there are plagioclase feldspars, which hold magnesium and calcium in a tight chemical structure, making them less reactive than other types of mine waste.

Back in the lab, they’re testing whether applying heat and adding ammonium salts and certain weak acids can break down the bonds, freeing up more calcium and magnesium to grab hold of carbon dioxide.

“If we can come up with a recipe on all these different tailings, the opportunities could explode,” Wilcox says.

Next steps

Woodall is exploring asbestos sites because he hopes to find one that might work well for a subsequent field trial to evaluate ways of accelerating carbon uptake.

The approaches could include spreading the material out to increase the reactive surface area, running fans that increase the amount of air flowing over the asbestos, or directly injecting concentrated carbon dioxide into the mineral pits.

Over time, these processes should form a mix of loosely bound rock and dirt, mainly composed of magnesium carbonates, bicarbonate, and calcium carbonate, that could simply be left in place, Aines says. Converting the asbestos would help to clean up these areas as well.

But is it safe to blow air around asbestos? And would such efforts fully remediate these toxic sites?

Mineral collection near a pond.
ROGER AINES, LAWRENCE LIVERMORE NATIONAL LAB

Given the health risks of asbestos, where—or even whether—any subsequent work takes place will depend on the determinations of scientific oversight boards and regulatory officials.

It’s possible that some amount of asbestos would remain or could be dispersed in the course of doing the work, Aines says. Those are among the key questions that would need to be tested, he adds.

It’s also why it’s important to do such work at a restricted site, and why any research or subsequent full-scale efforts would need to follow the clear rules and processes for working with these materials. Woodall stresses they would take all the necessary precautions, including spraying down the materials with water to prevent asbestos from floating around, as well as using sensors to monitor exposure levels.

Coming challenges

Ultimately, mine tailings on their own won’t get us very far.

Woodall estimates that one asbestos site in Vermont, with about 30 million tons of waste, could capture as much as 12 million tons of carbon dioxide. Mines globally produce enough mineral by-products to capture nearly 40 million tons of carbon dioxide per year, according to the National Academies study.

But all that is just a tiny fraction of the billions of tons of carbon dioxide that must be captured to meaningfully address climate change. So getting anywhere near the necessary scale will require digging up more of the minerals.
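A rough sanity check shows how wide that gap is. The sketch below uses only the figures quoted above, plus an assumed removal target of 10 billion tons per year for illustration; the target is not from the article.

```python
# Rough scale comparison using figures quoted in the article.
vermont_waste_t = 30e6            # tons of asbestos waste at one Vermont site
vermont_capture_t = 12e6          # estimated tons of CO2 it could capture
global_tailings_capture_t = 40e6  # tons CO2/year from mine by-products (NAS study)
target_t = 10e9                   # assumed ~10 Gt/year removal target (illustrative)

print(f"Capture ratio: {vermont_capture_t / vermont_waste_t:.1f} t CO2 per t waste")
print(f"Tailings cover {100 * global_tailings_capture_t / target_t:.1f}% of a 10 Gt/yr target")
```

On those numbers, tailings capture about 0.4 tons of carbon dioxide per ton of waste, and all the world's mine by-products together would meet well under 1 percent of a multi-billion-ton annual target.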

Woodall and Aines both say that could include asbestos, given how reactive it is, if field trials show the process is effective and safe.

But that idea is sure to raise serious concerns given the health risks posed by asbestos. And there are lots of other mineral options, even if they’re not quite as ideal.

Other research groups and nonprofits are already looking at ways to put additional minerals to work once they're extracted, including spreading ground-down olivine along beaches or sprinkling basalt dust onto farmland to absorb carbon dioxide and help fertilize crops.

Mining for any materials on a far larger scale, however, will face a number of challenges. Mining itself is environmentally destructive. All the energy required to extract, grind, distribute, and process the minerals will eat into any emissions reductions. And there could be serious limits on the available land, particularly since it can take years for most of the minerals to react with carbon dioxide.

For example, removing 2.5 billion tons of CO2 per year using magnesium oxide would require a 10-centimeter-thick (nearly 4 inches) layer covering about 15,000 square kilometers (almost 5,800 square miles), according to a Nature Communications paper in July. That’s equivalent to a little more than 5% of Nevada.
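The unit conversions in that example check out. A quick sketch, assuming only Nevada's land area of roughly 286,000 square kilometers on top of the article's figures:

```python
# Verify the area comparison in the magnesium-oxide example.
layer_km2 = 15_000      # spreading area from the Nature Communications paper, km^2
nevada_km2 = 286_380    # approximate land area of Nevada, km^2 (assumed)
KM2_PER_SQ_MI = 2.59    # square kilometers per square mile

print(f"{layer_km2 / KM2_PER_SQ_MI:.0f} sq mi")       # matches "almost 5,800 square miles"
print(f"{100 * layer_km2 / nevada_km2:.1f}% of Nevada")
```

That yields about 5,790 square miles, or roughly 5.2 percent of the state, consistent with the paper's comparison.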

But the major stumbling block is the cost. Wilcox says it can run more than $200 per ton all-in, which is far more expensive than planting trees.

It’s possible that some of the materials could go into commercial products, like the aggregates in concrete, to defray the costs. Some level of voluntary carbon offsets, where people or corporations pay to balance out their own emissions, could help as well. But getting to the scale of billions of tons, most observers believe, will take aggressive public policies that put high prices on carbon pollution or create generous incentives for removing it.
