The FAA has published its updated rules for commercial space launches and reentries, streamlining and modernizing the large and complicated set of regulations. With rockets launching in greater numbers and variety, and from more providers, it makes sense to get a bit of the red tape out of the way.

The rules provide for licensing of rocket launch operators and approval of individual launches and reentry plans, among other things. As you can imagine, such rules must be complex in the first place, more so when they’ve been assembled piecemeal for years to accommodate a quickly moving industry.

U.S. Transportation Secretary Elaine Chao called the revisions a “historic, comprehensive update.” They consolidate four sets of regulations and unify licensing and safety rules under a single umbrella, while allowing flexibility for different types of operators or operations.

According to a press release from the FAA, the new rules allow:

  • A single operator’s license that can be used to support multiple launches or reentries from potentially multiple launch site locations.
  • Early review when applicants submit portions of their license application incrementally.
  • Applicants to negotiate mutually agreeable reduced time frames for submittals and application review periods.
  • Applicants to apply for a safety element approval with a license application, instead of needing to submit a separate application.
  • Additional flexibility on how to demonstrate high consequence event protection.
  • Neighboring operations personnel to stay during launch or reentry in certain circumstances.
  • Ground safety oversight to be scoped to better fit the safety risks and reduce duplicative requirements when operating at a federal site.

In speaking with leaders in the commercial space industry, a common theme is the burden of regulation. Any reform that simplifies and unifies will likely be welcomed by the community.

The actual regulations are hundreds of pages long, so it’s still hardly a simple task to get a license and start launching rockets. But at least it isn’t several sets of 500-page documents that you have to accommodate simultaneously.

The new rules have been submitted for entry in the Federal Register and will take effect 90 days after that happens. In addition, the FAA will be putting out Advisory Circulars for public comment — additions to and elaborations on the rules, of which the agency says there may be as many as two dozen in the next year. You can keep up with those here.

Read more

If you’re reading this, you probably didn’t get here from Twitter. Users have been reporting widespread outages for at least an hour, affecting a range of activities on the site, from newsfeeds to the ability to tweet. The company has acknowledged the ongoing problem, noting on its official status page that it is investigating:

Update – We are continuing to monitor as our teams investigate. More updates to come.
Oct 15, 22:31 UTC
Investigating – We are currently investigating this issue. More updates to come.
Oct 15, 21:56 UTC

Twitter responded to our request for comment, stating, “We know people are having trouble Tweeting and using Twitter. We’re working to fix this issue as quickly as possible. We’ll share more when we have it and Tweet from @TwitterSupport when we can – stay tuned.”

We’ll update as we hear more.
Read more

It’s easy to be smitten with the H4 at first sight. They’re a great-looking pair of headphones — one of the best I’ve seen. They sport a simple, streamlined design that feels like an homage to older models, yet modern enough to avoid the nostalgia trap.

They’re comfortable, too. Like crazy comfortable. I say this as someone who is prone to dull earaches after wearing most models of over-ear headphones for an extended period. Since Bang & Olufsen sent me a pair to test a while back, I’ve been wearing them for hours on end, prepping for a write-up during Work From Home Week.

The headphones sport an abundance of padding on the rim of their perfectly round cups. My ears sit snugly inside, with none of the padding pressing on the ear — something that’s often a source of pressure after extended wear. They’re fairly lightweight — that helps. At 8.3 ounces, they fall between the Bose QuietComfort 35 II (8.2 ounces) and the Sony WH-1000XM4 (8.96 ounces).

Image Credits: Brian Heater

The cups are covered in leather — either matte black or limestone (kind of a cream) — coupled with a large brushed-metal plate bearing the B&O logo, which complements the cups’ concentric-circle design. The right cup has a volume rocker, a power/pairing switch and a port for an auxiliary cable. The ear cups swivel smoothly, which should suit a variety of head sizes.

The sound is good. It’s nice and full — though B&O leans a bit too heavily on the bass for some tastes. They’re not quite as egregious as other units, but it’s very noticeable, particularly with traditionally bass-heavy genres like hip-hop. If you’re looking for more balanced, true-to-life music reproduction, you’re going to want to look elsewhere.

The absence of active noise canceling is a pretty big blind spot for a pair of $300 headphones in 2020. Even if you think you don’t need the feature, trust me, there are plenty of times you’ll be glad you have it. Take my working from home adventures over the past six months: They just started construction directly outside of my window, and it’s the worst. The Bluetooth, too, is decent, but walking around my apartment, I found them quicker to cut out than, say, the Sonys.

Image Credits: Brian Heater

There are units with longer battery life, too. Given that the H4s aren’t collapsible and don’t have ANC, though, I’m guessing the company isn’t really targeting frequent fliers here. With a rated battery life of up to 19 hours, they’ll get you through a day of home use, no problem.

Read more

Stripe makes a big acquisition, Google rolls out search improvements and Snapchat adds a TikTok-y feature. This is your Daily Crunch for October 15, 2020.

The big story: Stripe acquires Nigeria’s Paystack

Stripe has made its biggest acquisition to date. It announced today that it bought Paystack, a Lagos-headquartered startup that makes it easy to integrate payment services — we’ve referred to it in the past as “the Stripe of Africa.”

Sources tell us that the acquisition price was more than $200 million.

In an interview with TechCrunch, Stripe CEO Patrick Collison said that expanding into Africa presents the company with “an enormous opportunity,” adding that Stripe is planning for “a longer time horizon” than most other companies: “We are thinking of what the world will look like in 2040-2050.”

The tech giants

Google launches a slew of Search updates — These new AI-focused improvements include the ability to answer questions that call for very specific answers, as well as a new algorithm that better handles typos in your queries.

Snapchat launches its TikTok rival, Sounds on Snapchat — Snapchat made good on its promise to release a new feature that would allow users to set their Snaps to music.

Mario Kart Live: Home Circuit review — Bryce Durbin offers an illustrated look at a new edition of Mario Kart that incorporates a real remote-controlled car.

Startups, funding and venture capital

River, the latest venture from Wander founder Jeremy Fisher, launches with $10.4M in funding — River is meant to rethink the way we consume content across the internet.

Small business payments and marketing startup Fivestars raises $52.5M — It’s a difficult time for small businesses, and Fivestars CEO Victor Ho said that many of the big digital platforms aren’t helping.

Bipedal robot developer Agility announces $20M raise — Agility’s Digit is a package delivery robot capable of navigating stairs and other terrain.

Advice and analysis from Extra Crunch

News that Calm seeks more funding at a higher valuation is not transcendental thinking — We rewind the clock and review data from 2018, 2019 and 2020 about the meditation app.

Brighteye Ventures’ Alex Latsis talks European edtech funding in 2020 — Edtech-focused European venture firm Brighteye Ventures recently announced the $54 million first close of its second fund.

Tesla’s decision to scrap its PR department could create a PR nightmare — The move effectively makes CEO Elon Musk the company’s lone voice.

(Reminder: Extra Crunch is our subscription membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

New Oxford machine learning-based COVID-19 test can provide results in under 5 minutes — The test also offers advantages when it comes to detecting actual virus particles, instead of antibodies or other signs of the presence of the virus.

When was the last time you worked out your soul? — Another discussion of wellness startup funding, this time via the latest episode of the Equity podcast.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Read more

Hackers played a significant role in the 2016 election, when the Russian government hacked into the Democratic campaign and ran an information operation that dominated national headlines. American law enforcement, intelligence services, and even Republican lawmakers have concluded, repeatedly, that Moscow sought to interfere with the election in favor of Donald Trump.

Meanwhile, in the last four years, ransomware has exploded into a multibillion-dollar business. It’s a type of malware that hackers use to restrict access to data or machines until they’re paid ransoms that can run into the tens of millions of dollars. There’s now a global extortion industry built on the fact that the critical infrastructure and digital systems we rely on are deeply vulnerable. 

Put those two things together, and you get the nightmare scenario many election security officials are focused on: that ransomware could infect and disrupt election systems in some way, perhaps by targeting voter registration databases on the eve of Election Day. Steps to prevent such attacks are well under way.

Tackling TrickBot

In the past month, the US military and Microsoft have thrown two distinct and apparently uncoordinated haymakers at the world’s largest botnet, TrickBot—a network of infected computers that could be used in ransomware operations, including those that could target election systems. 

US Cyber Command mounted a hacking operation to temporarily disrupt TrickBot, according to a report by the Washington Post, while Microsoft went to court to take down TrickBot’s command-and-control servers. Both operations will likely have just a short-term impact on the botnet’s operations, but that may be enough to prevent an Election Day ransomware debacle.

Meanwhile, security officials have been pushing states to set up multiple offline backups to prepare for potential attacks on voter registration databases and election results reporting systems. 

“The primary source of resilience for voter registration databases—in addition to ensuring good network segmentation, having multi-factor authentication, patching your systems—is to have offline backups,” Brandon Wales, the executive director at the Cybersecurity and Infrastructure Security Agency (CISA), told me recently in an interview for MIT Technology Review’s Spotlight On event series. “We have seen a dramatic increase in this over the last four years. States are in much better shape now than they were four years ago.” 

CISA has also pushed states to build in other security layers, such as maintaining paper backups of e-poll books and all votes cast, and doing a risk-limiting audit after the vote.
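
To make the offline-backup recommendation concrete, here is a minimal sketch of a backup routine with integrity checks. Everything in it, the database name, the paths, and the use of PostgreSQL's pg_dump, is a hypothetical assumption for illustration, not a description of any state's actual election systems.

```python
# Hypothetical sketch of the kind of offline-backup routine CISA recommends.
# The database name, paths, and use of PostgreSQL's pg_dump are illustrative
# assumptions, not a description of any state's actual election systems.
import hashlib
import subprocess
from datetime import datetime, timezone
from pathlib import Path

BACKUP_DIR = Path("/mnt/offline-media")  # removable media kept off-network


def back_up_database(db_name: str = "voter_registration") -> Path:
    """Dump the database and record a SHA-256 digest for later verification."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    dump_path = BACKUP_DIR / f"{db_name}-{stamp}.dump"
    # pg_dump's custom format (-Fc) supports compressed, selective restores.
    subprocess.run(["pg_dump", "-Fc", "-f", str(dump_path), db_name], check=True)
    digest = hashlib.sha256(dump_path.read_bytes()).hexdigest()
    dump_path.with_suffix(".sha256").write_text(f"{digest}  {dump_path.name}\n")
    return dump_path


def verify_backup(dump_path: Path) -> bool:
    """Recompute the digest; a mismatch signals corruption or tampering."""
    recorded = dump_path.with_suffix(".sha256").read_text().split()[0]
    return hashlib.sha256(dump_path.read_bytes()).hexdigest() == recorded


if __name__ == "__main__":
    path = back_up_database()
    print(f"Backup written; verified: {verify_backup(path)}")
```

The point of the digest file is that a ransomware attack which silently alters or encrypts the dump would be caught at restore time, which is exactly the failure mode the offline copies are meant to survive.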

But let’s be clear: for all the worry and hype, no such attack against election infrastructure has yet occurred.

The disinformation threat

Even a wildly successful ransomware attack against election systems would slow but not prevent voting, senior officials have said repeatedly. Instead, the real threat to election security would come in the aftermath.

“Whether it’s a nation-state or cybercriminal, whether the attack is successful or not, the biggest concern is the disinformation that will arise,” says Allan Liska, an intelligence analyst at the cybersecurity firm Recorded Future. “It’s a worry because people already have shaky confidence.”

A ransomware attack against election systems would give fuel to unfounded conspiracy theories that the election is rigged, unreliable, or being stolen. Take the widespread conspiracy theories over “mail dumping,” another attempt to undermine confidence in the election.

If any ransomware attack were to happen, then widespread disinformation about the vote itself would no doubt spread. And by the time such disinformation was debunked by traditional media or removed by social-media platforms, it might have reached millions of people. The biggest offender here is the president of the United States, who has proved an adept manipulator of the traditional press to push his disinformation campaign.

This is an excerpt from The Outcome, our daily email on election integrity and security. Click here to sign up for regular updates.

Read more

Technology companies provide much of the critical infrastructure of the modern state and develop products that affect fundamental rights. Search and social media companies, for example, have set de facto norms on privacy, while facial recognition and predictive policing software used by law enforcement agencies can contain racial bias.

In this episode of Deep Tech, Marietje Schaake argues that national regulators aren’t doing enough to enforce democratic values in technology, and it will take an international effort to fight back. Schaake—a Dutch politician who used to be a member of the European parliament and is now international policy director at Stanford University’s Cyber Policy Center—joins our editor-in-chief, Gideon Lichfield, to discuss how decisions made in the interests of business are dictating the lives of billions of people. 

Also this week, we get the latest on the hunt to locate an air leak aboard the International Space Station—which has grown larger in recent weeks. Elsewhere in space, new findings suggest there is even more liquid water on Mars than we thought. It’s located in deep underground lakes and there’s a chance it could be home to Martian life. Space reporter Neel Patel explains how we might find out. 

Back on Earth, the US election is heating up. Data reporter Tate Ryan-Mosley breaks down how technologies like microtargeting and data analytics have improved since 2016. 

Check out more episodes of Deep Tech here.

Full episode transcript:

Gideon Lichfield: There’s a situation playing out onboard the International Space Station that sounds like something out of Star Trek…

Computer: WARNING. Hull breach on deck one. Emergency force fields inoperative. 

Crewman: Everybody out. Go! Go! Go!

*alarm blares* 

Gideon Lichfield: Well, it’s not quite that bad. But there is an air leak in the space station. It was discovered about a year ago, but in the last few weeks, it’s gotten bigger. And while NASA says it’s still too small to endanger the crew… well… they also still can’t quite figure out where the leak is.

Elsewhere in space, new findings suggest there is even more liquid water on Mars than we thought. It’s deep in underground lakes. There might even be life in there. The question is—how will we find out?

Here on Earth, meanwhile, the US election is heating up. We’ll look at how technologies like microtargeting and data analytics have improved since 2016. That means campaigns can tailor messages to voters more precisely than ever. 

And, finally, we’ll talk to one of Europe’s leading thinkers on tech regulation, who argues that democratic countries need to start approaching it in an entirely new way.

I’m Gideon Lichfield, editor-in-chief of MIT Technology Review, and this is Deep Tech. 

The International Space Station always loses a tiny bit of air, and it’s had a small leak for about a year. But in August, Mission Control noticed air pressure on board the station was dropping—a sign the leak was expanding.

The crew were told to hunker down in a single module and shut the doors between the others. Mission Control would then have a go at pressurizing each sealed module to determine where the leak was.

As our space reporter Neel Patel writes, this process went on for weeks. And they didn’t find the leak. Until, one night…

Neel Patel: On September 28th, in the middle of the night, the astronauts are woken up — two cosmonauts and one astronaut are currently on the ISS. And mission control tells them, “Hey, we think we know where the leak is, finally. You guys have to go to the Russian side of the station, in the Zvezda module, and start poking around and see if you can find it.”

Gideon Lichfield: Okay. And so they got up and they got in the, in the module and they went and poked around. And did they find it? 

Neel Patel: No, they have still not found that leak yet. These things take a little bit of time. It’s, you know, you can’t exactly just run around searching every little wall in the module and, you know, seeing if there’s a little bit of cool air that’s starting to rush out.

The best way for the astronauts to look for the leak is a little ultrasonic leak detector. That kind of spots the frequencies at which air might be rushing out. And that’s an indication of where there might be some airflow where there shouldn’t be. And it’s really just a matter of holding that leak detector up to sort of every little crevice and determining if things are, you know, not the way they should be.
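
For a sense of how such a detector works in principle, here is a rough, hypothetical sketch: escaping air produces broadband noise that extends above the audible range, so a detector can flag recordings whose energy concentrates in an ultrasonic band. The sample rate, band edges, and threshold below are illustrative assumptions, not the specs of the actual ISS tool.

```python
# Hypothetical illustration of the principle behind an ultrasonic leak
# detector: escaping air produces broadband noise above the audible range,
# so we flag recordings whose energy concentrates in an ultrasonic band.
# Sample rate, band edges, and threshold are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 192_000  # Hz; high enough to capture ~40 kHz leak noise


def ultrasonic_ratio(samples: np.ndarray, band=(35_000, 45_000)) -> float:
    """Fraction of total signal energy that falls inside the ultrasonic band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[in_band].sum() / spectrum.sum())


def looks_like_leak(samples: np.ndarray, threshold: float = 0.3) -> bool:
    """Crude detector: leak candidate if the band holds >30% of the energy."""
    return ultrasonic_ratio(samples) > threshold
```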

Gideon Lichfield: So as I mentioned earlier, the space station always leaks a little bit. What made this one big enough to be worrying? 

Neel Patel: So..the.. you know, like I said before, the air pressure was dropping a little bit. That’s an indication that the hole is not stable, that there might be something wrong, that there could allegedly be some kind of cracks that had been growing.

And if that’s the case, it means that the hull of the spacecraft at that point is a little bit unstable. And if the leak is not taken care of as soon as possible, if the cracks are not repaired, as soon as possible, things could grow and grow and eventually reach a point where something might break. Now, that’s a pretty distant possibility, but you don’t take chances up in space. 

Gideon Lichfield: Right. And also you’re losing air and air is precious… 

Neel Patel: Right. And in this instance, there was enough air leaking that there started to be concerns from both the Russian and US sides that they may need to send in more oxygen sooner than later.

And, you know, the way space operations work, you have things planned out for years in advance. And of course, you know, you still have a leak to worry about. 

Gideon Lichfield: So how do leaks actually get started on something like the ISS?

Neel Patel: So that’s a good question. And there are a couple ways for this to happen. Back in 2018, there was a two millimeter hole found on the Russian Soyuz spacecraft. 

That was very worrisome and no one understood initially how that leak might’ve formed. Eventually it was determined that a drilling error during manufacturing probably caused it. That kind of leak was actually sort of good news because it meant that, with a drilling hole, things are stable. There aren’t any kind of like aberrant cracks that could, you know, get bigger and start to lead to a bigger destruction in the hull. So that was actually good news then, but other kinds of leaks are mostly thought to be caused by micrometeoroids. 

Things in space are flying around at over 20,000 miles per hour, which means even the tiniest little object, even the tiniest little grain of dust, could, you know, just whip a very massive hole inside the hull of the space station. 

Gideon Lichfield: Ok so those are micrometeoroids that are probably causing those kinds of leaks, but obviously there’s also a growing problem of space debris. Bits of spacecraft and junk that we’ve thrown up into orbit that are posing a threat. 

Neel Patel: Absolutely, space debris is a problem. It’s only getting worse and worse with every year. Probably the biggest, most high-profile incident that caused the most space debris in history was the 2009 crash between two satellites, Iridium 33 and Cosmos 2251. That was the first and only crash between two operational satellites that we know of so far. And the problem with that crash is it ended up creating tons and tons of debris less than 10 centimeters in length. Now, objects greater than 10 centimeters are tracked by the Air Force, but anything smaller than 10 centimeters is virtually undetectable so far. That means that, you know, any of these little objects that are under 10 centimeters, which is, you know, a lot of different things, are threats to the ISS. And as I mentioned before, at the speed these things are moving, they could cause big destruction for the ISS or any other spacecraft in orbit. 
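
A quick back-of-the-envelope calculation shows why even sub-10-centimeter fragments are so dangerous; the one-gram mass and 10 km/s closing speed here are illustrative assumptions, not figures from the episode. Kinetic energy grows with the square of velocity:

E = \tfrac{1}{2} m v^{2} = \tfrac{1}{2}\,(0.001\ \text{kg})\,(10{,}000\ \text{m/s})^{2} = 5 \times 10^{4}\ \text{J}

So a one-gram fleck at orbital closing speeds carries tens of kilojoules, an order of magnitude more energy than a rifle bullet delivers.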

Gideon Lichfield: So it’s basically a gamble? Yeah? They’re just hoping that none of these bits crashes into it, because if it does, there’s nothing they can do to spot it or stop it. 

Neel Patel: No, our radar technologies are getting better. So we’re able to spot smaller and smaller objects, but this is still a huge problem that so many experts have been trying to raise alarms about.

And unfortunately, the sort of officials that be, that control, you know, how we manage the space environment still haven’t come to a consensus about what we want to do about this, what kind of standards we want to implement and how we can reduce the problem. 

Gideon Lichfield: So… They still haven’t found this leak. So what’s going on now?

Neel Patel: Okay. So according to a NASA spokesperson, quote, there have been no significant updates on the leak since September 30th. Roscosmos, the Russian space agency, released information that further isolated the leak to the transfer chamber of the Zvezda service module. The investigation is still ongoing and poses no immediate danger to the crew.

Gideon Lichfield: All right, leaving Earth orbit for a bit. Let’s go to Mars. People have been looking for water on Mars for a long time, and you recently reported that there might be more liquid water on Mars than we originally thought. Tell us about this discovery. 

Neel Patel: So in 2018, a group of researchers used radar observations that were made by the European Space Agency’s Mars Express orbiter to determine that there was a giant, subsurface lake sitting 1.5 kilometers below the surface of Mars underneath the glaciers near the South pole. The lake is huge. It’s almost 20 kilometers long and is, you know, liquid water. We’re not talking about the frozen stuff that’s sitting on the surface. We’re talking about liquid water. Two years later, the researchers have come back to even more of that radar data. And what they found is that neighboring that body of water might be three other lakes. Also nearby, also sitting a kilometer underground.

Gideon Lichfield: So how does this water stay liquid? I mean Mars is pretty cold, especially around the poles. 

Neel Patel: So the answer is salt. It’s suspected that these bodies of water have been able to exist in a liquid form for so long, despite the frigid temperatures, because they’re just caked in a lot of salt. Salts, as you might know, can significantly lower the freezing point of water. On Mars, it’s thought that there might be calcium, magnesium, sodium, and other salt deposits.

These have been found around the globe, and it’s probable that these salts also exist inside the lakes. And that’s what has allowed them to stay liquid instead of solid for so long. 
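
The mechanism Patel describes is ordinary freezing-point depression. As an ideal-solution illustration only (the 5 mol/kg concentration and two-ion dissociation are assumptions, and real Martian brines, likely perchlorates, lower the freezing point far more than this simple formula suggests):

\Delta T_f = i\, K_f\, m \approx 2 \times 1.86\ \tfrac{\text{K}\cdot\text{kg}}{\text{mol}} \times 5\ \tfrac{\text{mol}}{\text{kg}} \approx 18.6\ \text{K}

where i is the number of ions per dissolved formula unit, K_f is water’s cryoscopic constant, and m is the salt’s molality.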

Gideon Lichfield: So what would it take to get to these underground lakes? If we could actually be on Mars and what might we find when we got there?

Neel Patel: These lakes, as I’ve mentioned, are sitting at least one kilometer sometimes further, deeper, underground. Uh, there’s not really a chance that any kind of future Martian explorers in the next generation or two are going to have the type of equipment that are gonna allow them to drill all the way that deep.

Which is not really a problem for these future colonists. There’s plenty of surface ice at the Martian poles that’s easier to harvest in case they want to create drinking water or, you know, turn that into hydrogen-oxygen rocket fuel. 

The important thing to think about is whether these underground lakes perhaps possess Martian life. As we know on Earth, life can exist in some very extreme conditions, and there’s, you know, at least a non-zero chance that these lakes perhaps also possess the same sort of extreme microbes that can survive these kinds of frigid temperatures and salty environments.

Gideon Lichfield: Alright so maybe we don’t want to try to drink this water, but it would be great if we could explore it to find out if there is in fact life there. So is there any prospect that any current or future space mission could get to those lakes and find that out?

Neel Patel: No, not anytime soon. Drilling equipment is very big, very heavy. There’s no way you’re going to be able to properly fit something like that on a spacecraft that’s going to Mars. But one way we might be able to study the lakes is by measuring the seismic activity around the South pole. 

If we were to place a small little lander on the surface of Mars and have it drill just a little ways into the ground, it could measure the vibrations coming out of Mars. It could use that data to characterize how big the lakes are, what their shape might be. And by extension, we may be able to use that data to determine, you know, in what locations of the lakes life might exist and, you know, figure out where we want to probe next for further study.

Gideon Lichfield: Technology has been an increasingly important part of political campaigns in the US, particularly since Barack Obama used micro-targeting and big data to transform the way that he campaigned. With every election since then, the techniques have gotten more and more sophisticated. And in her latest story for MIT Technology Review, Tate Ryan-Mosley looks at some of the ways in which the campaigns this time round are segmenting and targeting voters even more strategically than before. So Tate, can you guide us through what is new and improved and how things have changed since the 2016 election?

Tate Ryan-Mosley: Yeah. So I’ve identified kind of four key continuations of trends that started in prior presidential elections, and all, kind of all of the trends are pushing towards this kind of new era of campaigning where all of the messages, the positioning, the presentation of the candidates is really being, you know, personalized for each individual person in the United States. And so, the key things driving that are really, you know, data acquisition. So the amount of data that these campaigns have on every person in the United States. Another new thing is data exchanges, which is kind of the structural mechanism by which all of this data is aggregated and shared and used.

And then the way that that data kind of gets pushed into the field and into strategy is of course microtargeting. And this year, you know, we’re seeing campaigns employ things with much more granularity, like using SMS as one of the main communication tools to reach prospective voters and actually uploading lists of profile names into social media websites. And lastly, kind of a big shift in 2020 is a clearer move away from kind of the traditional opinion polling mechanisms into AI modeling. So instead of having, you know, these big polling companies call a bunch of people and try to get a sense of the pulse of the election, you’re really seeing AI being leveraged to predict the outcomes of elections and in particular segments.

Gideon Lichfield: So let’s break a couple of those things down. One of the areas that you talked about is data exchanges, and there’s a company that you write about in your story called Data Trust. Can you tell us a bit about who they are and what they do? 

Tate Ryan-Mosley: So Data Trust is the Republicans’ kind of main data aggregation technology. And so what it enables them to do is collect data on all prospective voters, host that data, analyze the data, and actually share it with politically aligned PACs, 501(c)(3)’s and 501(c)(4)’s. And previously, because of FEC regulations, you’re not allowed to kind of cross that wall between campaigns and 501(c)(3)’s, 501(c)(4)’s and PACs. And the way that these data exchanges are set up is it’s enabling data sharing between those groups. 

Gideon Lichfield: How does that not cross the wall?

Tate Ryan-Mosley: Right. So basically the, what they say is the data is anonymized to the point that you don’t know where the data is coming from. And that is kind of the way that they’ve been able to skirt the rules. The Democrats actually sued the Republicans after the 2016 election, and then they lost. And so what’s really notable is that this year the Democrats have created their own data exchange, which is called DDX. And so this is the first year that the Democrats will have any type of similar technology. And since the Democrats have come online, they’ve actually collected over 1 billion data points, which is a lot of data.
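
The mechanics aren’t spelled out in the episode, but one generic way to pool records while obscuring their source is to join on salted hashes of identifiers rather than on raw identities. The sketch below is purely hypothetical; it illustrates the general technique, not how Data Trust or DDX actually work.

```python
# Purely hypothetical sketch of hashed-identifier matching, one generic way
# organizations can pool records without exposing raw identities. Nothing
# here describes how Data Trust or DDX actually work; names are invented.
import hashlib

SHARED_SALT = b"exchange-wide-secret"  # assumed to be agreed out of band


def pseudonymize(voter_id: str) -> str:
    """Replace a raw identifier with a salted hash usable as a join key."""
    return hashlib.sha256(SHARED_SALT + voter_id.encode()).hexdigest()


# Each contributor uploads records keyed only by the pseudonym.
campaign_data = {pseudonymize("VA-0012345"): {"contacted": True}}
pac_data = {pseudonymize("VA-0012345"): {"likely_supporter": True}}

# The exchange can merge the two sides without either revealing raw IDs.
merged = {
    key: {**campaign_data.get(key, {}), **pac_data.get(key, {})}
    for key in campaign_data.keys() | pac_data.keys()
}
print(merged)
```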

Gideon Lichfield: So these data exchanges allow basically a campaign and everyone that is aligned with it, supporting it, to share all the same data. And what is that enabling them to do that they couldn’t do before?

Tate Ryan-Mosley: Yeah, that’s a good question. And what it’s really doing is it’s kind of enabling a lot of efficiency in the way that voters are being reached. So there’s a lot of double spend on voters who are already decided. So for example, the Trump campaign might be reaching out to a particular, you know, voter that has already been determined by a group like the NRA to be, you know, conservatively aligned and very likely to vote for Trump. But the Trump campaign doesn’t know that in their data set. So this would enable the Trump campaign to not spend money reaching out to that person. And it makes kind of the efficiency and the comprehensiveness of their outreach kind of next level.

Gideon Lichfield: So let’s talk about micro-targeting. The famous example of micro-targeting of course, is Cambridge Analytica, which illicitly acquired a bunch of people’s data from Facebook in the 2016 campaign, and then claimed that it could design really specific messages aimed at millions of American voters. And a lot of people, I think called that ability into question, right. But where are we now with microtargeting? 

Tate Ryan-Mosley: There’s kind of this misconception around the way in which microtargeting is impactful. What Cambridge Analytica claimed to do was use data about people’s opinions and personalities to profile them and create messages that were really likely to persuade a person about a specific issue at a particular time. And that’s kind of what’s been debunked. That, you know, political ads, political messages are not actually significantly more persuasive now than they’ve ever been. And really you can’t prove it. There’s no way to attribute a vote to a particular message or a particular ad campaign. 

Tate Ryan-Mosley: So what’s really become the consensus about, you know, why micro-targeting is important is that it increases the polarization of the electorate or the potential electorate. So basically it’s really good at identifying already decided voters and making them either more mobilized, so, you know, more vocal about their cause and their position, or bringing them increasingly into the hard line, and even getting them to donate. So we saw this pretty clearly with the Trump campaign’s app that they put out this year.

So there’s a lot of surveillance kind of built into the structure of the app that is meant to micro-target their own supporters. And the reason they’re doing that is that’s kind of seen as the number one fundraising mechanism. If we can convince somebody who agrees with Trump to get really impassioned about Trump, that really means, that means money. 

Gideon Lichfield: Let’s talk about another thing, which is polling. Of course, the difficulty with polling that we saw in the 2016 election was people don’t answer their phones anymore and doing an accurate opinion poll is getting harder and harder. So how is technology helping with that problem? 

Tate Ryan-Mosley: So what’s being used now is AI modeling, which basically takes a bunch of data and spits out a prediction about how likely a person is either to show up to vote, to vote in a particular way, or to feel a certain way about a particular issue. And these AI models were also used in 2016, and it’s worth noting that in 2016, AI models were about as accurate as traditional opinion polls in terms of, you know, really not predicting that Trump was going to win. But you know, as the data richness gets better, as data gets more, you know, becomes more real time, as the quality improves, we’re seeing an increased accuracy in AI modeling that signifies it’s likely to become a bigger and bigger part of how polling is done. 
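
As a concrete illustration of what such a model looks like at its simplest, here is a hypothetical sketch of a turnout classifier. The features and training rows are invented for the example; real campaign models are proprietary and trained on far richer data.

```python
# Illustrative sketch of the kind of "AI modeling" described above: a
# classifier scoring each voter's turnout likelihood from campaign data.
# Features and training rows are invented for the example; real campaign
# models are proprietary and trained on far richer data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per voter: age, elections voted in (of last 4),
# donation count, positive responses to contact attempts.
X_train = np.array([
    [62, 4, 2, 3],
    [23, 0, 0, 0],
    [45, 2, 1, 1],
    [71, 3, 0, 2],
])
y_train = np.array([1, 0, 1, 1])  # 1 = voted in the last comparable election

model = LogisticRegression().fit(X_train, y_train)

# Score a new voter: the probability is used to prioritize (or skip) outreach.
new_voter = np.array([[34, 1, 0, 1]])
print(f"Turnout probability: {model.predict_proba(new_voter)[0, 1]:.2f}")
```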

Gideon Lichfield: So what we’re seeing is that this election represents a new level in the use of technologies that we’ve seen over the past decade or more, technologies that are giving campaigns the ability to target people ever more precisely, to share data about people more widely and use it more efficiently, as well as to predict which way voters are going to go much more reliably. So what does all this add up to? What are the consequences for our politics? 

Tate Ryan-Mosley: What we’re really seeing is kind of a fragmentation of campaign messaging and the ability to kind of scale those fragments and those silos up. And so what’s happening is it’s becoming significantly easier for campaigns to say different things to different groups of people, and that kind of skirts some of the norms that we have in public opinion and civic discourse, around lying, around, you know, switching positions, around distortion, that have in the past really been able to check public figures.

Gideon Lichfield: Because politicians can say one thing to one group of people, a completely different thing to a different group. And the two groups don’t know that they’re being fed different messages. 

Tate Ryan-Mosley: Exactly. So, you know, the Biden campaign can easily send out a text message to a small group of, you know, 50 people in a swing county that says something really specific to their local politics. And most people wouldn’t ever know, or really be able to fact-check them, because they just don’t have access to the messages that campaigns are giving, you know, really specific groups of people.

And so that’s really kind of changing the way that we have civic discourse. And you know, it even allows some campaigns to kind of manufacture cleavages in the public. So a campaign can actually kind of game out how it wants to be viewed by a specific group of people and hit those messages home, you know, and kind of create cleavages that previously weren’t there or wouldn’t be there organically. 

Gideon Lichfield: Does that mean that American politics is just set to become irretrievably fragmented? 

Tate Ryan-Mosley: I mean, that’s absolutely the concern. What’s interesting is, as I’ve talked to some experts, some actually feel that this might indeed be the pinnacle of campaign technology and personalized campaigns, because public opinion is really shifting on this. So the Pew Research Center actually just did a survey that came out this month that showed that the majority of the American public does not think social media platforms should allow for any political advertisement at all.

And the large majority of Americans believe that political micro-targeting, especially on social media, should be disallowed. And we’re starting to see that reflected in Congress. So there are a handful of bills with bipartisan support that have been introduced in both the House and the Senate that are seeking to kind of address some of these issues. Obviously we won’t see the impact of that before the 2020 election, but a lot of experts are pretty hopeful that we’ll be able to see some legitimate regulation before the 2024 presidential election. 

Gideon Lichfield: Tech companies are setting norms and standards of all kinds that used to be set by governments. That’s the view of Marietje Schaake, who wrote an essay for us recently. Marietje is a Dutch politician who used to be a member of the European parliament and is now international policy director at Stanford University’s Cyber Policy Center. Marietje, what’s a specific example of the way in which the decisions that tech companies have made end up effectively setting the norms for the rest of us?

Marietje Schaake: Well, I think a good example is how, for example, facial recognition systems and even the whole surveillance model of social media and search companies have set de facto norms compromising the right to privacy. I mean, if you look at how much data is collected across a number of services, the fact that there are data brokers renders the right to privacy very, very fragile, if not compromised as such. And so I think that is an example, especially if there are no laws to begin with, where the de facto standard is very, very hard to roll back once it’s set by the companies. 

Gideon Lichfield: Right. So how did we get to this?

Marietje Schaake: Yeah, that’s, that’s the billion dollar question. And I think we have to go back to the culture that went along with the rise of companies coming out of Silicon Valley, which was essentially quite libertarian. And I think these companies, these entrepreneurs, these innovators may have had good intentions, may have hoped that their inventions and their businesses would have a liberating effect, and they convinced lawmakers that the best support they could give this liberating technology was to do nothing in the form of regulation. And effectively in the US and in the EU—even if the EU is often called a super regulator—there has been very, very little regulation to preserve core principles like non-discrimination or antitrust in light of the massive digital disruptions. And so the success of the libertarian culture from Silicon Valley, and the power of big tech companies that can now lobby against regulatory proposals, explains why we are where we are.

Gideon Lichfield: One of the things that you say in your essay is that there are actually two kinds of regulatory regimes in the world, for tech. There’s the privatized one, in other words, in Western countries the tech companies are really the ones setting a lot of the rules for how the digital space works. And then there’s an authoritarian one which is China, Russia, and other countries where governments are taking a very heavy handed approach to regulation. What are the consequences then of having a world in which it’s a choice between these two regimes? 

Marietje Schaake: I think the net result is that the resilience of democracy and actually the articulation of democratic values, the safeguarding of democratic values, the building of institutions has lagged behind. And this comes at a time where democracy is under pressure globally anyway. We can see it in our societies. We can see it on the global stage where in multilateral organizations, it is not a given that the democracies have the majority of votes or, or voices. And so all in all it makes democracy and projected out into the future, the democratic mark on the digital world, very fragile. And that’s why I think there’s reason for concern.

Gideon Lichfield: Okay. So in your essay, you’re proposing a solution to all of this, which is a kind of democratic alliance of nations to create rules for tech governance. Why is that necessary?

Marietje Schaake: Right. I think it’s necessary for democracies to work together much more effectively, and to step up their role in developing a democratic governance model of technology. And I think it’s necessary because of the growing power of corporations and their, uh, ability to set standards and effectively to govern the digital world on the one hand.

And then on the other hand, a much more top down control oriented state led model that we would see in States like China and Russia. There, there’s just too much of a vacuum on the part of democracies. And I think if they work together, they’re in the best position to handle cross border companies and to have an effective way of working together to make sure that they leverage their collective scale, essentially. 

Gideon Lichfield: Can you give an example of how this democratic coalition would work? What sorts of decisions might it take or where might it set rules? 

Marietje Schaake: Well, let me focus on one area that I think needs a lot of work and attention. And that is the question of how to interpret laws of war and armed conflict but also the preservation of peace and accountability after cyber attacks.

So right now, because there is a vacuum in the understanding of how laws of armed conflict and thresholds of war apply in the digital world, attacks happen every day. But often without consequences. And the notion of accountability, I think is very important as part of the rule of law to ensure that there is a sense of justice also in the digital world. And so I can very well imagine that in this space that really needs to be articulated and shaped now with institutions and mechanisms, then the democracies could, could really focus on that area of war, of peace, of accountability. 

Gideon Lichfield: So when you say an attack happens without consequences, you mean some nation state or some actor launches a cyber attack and nobody can agree that it should be treated as an act of war?

Marietje Schaake: Exactly. I think that that is happening far more often than people might realize. And in fact, because there is such a legal vacuum, it’s easy for attackers to sort of stay in a zone where they can almost anticipate that they will not face any consequences. And part of this is political: how willing are countries to come forward and point to a perpetrator? But it’s also that there’s currently a lack of proper investigation to ensure that there might be something like a trial, you know, a court of arbitration where different parties can, can speak about their side of the conflict, and that there would be a ruling by an independent, judiciary-type organization to make sure that there is an analysis of what happened but that there are also consequences for clearly escalatory behavior. 

And if the lack of accountability continues, I fear that it will play into the hands of nations and their proxies. So addressing the current failure to hold to account perpetrators that launch cyber attacks to achieve their geopolitical or even economic goals is very urgent. So I would imagine that a kind of tribunal or a mechanism of arbitration could really help close this accountability gap.

Gideon Lichfield: That’s it for this episode of Deep Tech. This is a podcast just for subscribers of MIT Technology Review, to bring alive the issues our journalists are thinking and writing about.

Before we go, I want to quickly tell you about EmTech MIT, which runs from October 19th through the 22nd. It’s our flagship annual conference on the most exciting trends in emerging technology. 

This year, it’s all about how we can build technology that meets the biggest challenges facing humanity, from climate change and racial inequality to pandemics and cybercrime. 

Our speakers include the CEOs of Salesforce and Alphabet X, the CTOs of Facebook and Twitter, the head of cybersecurity at the National Security Agency, the head of vaccine research at Eli Lilly, and many others. And because of the pandemic, it’s an online event, which means it’s both much cheaper than in previous years and much, much easier to get to.

You can find out more and reserve your spot by visiting EmTechMIT.com – that’s E-M…T-E-C-H…M-I-T dot com – and use the code DeepTech50 for $50 off your ticket. Again, that’s EmTechMIT.com with the discount code DeepTech50. 

Deep Tech is written and produced by Anthony Green and edited by Jennifer Strong and Michael Reilly. I’m Gideon Lichfield. Thanks for listening.

Read more

MIT Technology Review is about the biggest discoveries and ideas in emerging technology. We’re keen to hear from people with interesting, provocative, and well-argued opinions on technology and where it’s taking us.

In most cases, opinion pieces should be tied either to a recent news event (and by “recent” we usually mean “in the past day or two”) or to a topic that’s generally important and on the public’s radar. Timeless ideas might get through too, but they’ll need to be pretty original.

There should also be a compelling reason why you are writing it—usually, that you’re an expert in the field.

How to pitch an opinion piece

Opinion pieces can come in many forms, but most often their job is to make the public aware of an important problem and say what you, as an expert, think the solution should be.

We most commonly reject an idea for being either too general and obvious (“We need to do more about climate change” or “We need more open government data”) or too niche (it fails to explain why the broader public should care). It should be a problem specific enough to be interesting, yet relevant to non-specialists, and with a solution plausible enough to take seriously.

Generally speaking, it’s better to make one point very well than to argue several points at once. Decide what evidence you need to make your best case and include that information in your pitch. Don’t try to be comprehensive or exhaustive—just focus on presenting your strongest argument.

Email us your pitch at opinion@technologyreview.com. If you have a piece already written, you can send that. Otherwise, send us three or four paragraphs outlining your argument and why you’re qualified to make it. Include a one-sentence summary of your argument in bold that could serve as the headline. (If you can’t come up with one, your idea probably needs more work.)

The guide below should help you with crystallizing your pitch as well as writing the piece itself.

How to write an opinion piece

Try imagining you’ve just been introduced to some friends of friends who know nothing about the subject. How would you hook them and then keep them listening long enough to make your point? What’s obvious to you and not obvious to them? What’s plain language to you and jargon to them? What’s interesting to you but unnecessary detail for them? Imagine that conversation. Then write as closely as possible to the way you’d speak.

Here’s a suggested structure. You don’t have to follow this, but it may help organize your thinking.

Begin by describing the problem. Minimize preamble. If the topic is well known, get straight to the point: “More and more people are saying Facebook needs to be broken up.” If not, consider starting with an anecdote. It’s easier for readers to get a mental grip on a story that exemplifies the problem than an abstract statement of it.

Next, give some context to the problem. First of all, unless it’s a very well-known one, why should our readers care about it? How does it affect them and the people around them?

Second, unless it’s a completely new problem, why are we reading about it now? Has it become more acute or urgent, or changed in some other way? Are there potential solutions now that weren’t possible before, thanks to new technology or a political or economic shift?

Third, if there have been previous attempts to solve the problem, why have they failed? Or if other solutions are currently being tried or proposed, why won’t they work?

You’re now ready to present your solution to the problem. A common pitfall here, though, is to not be specific enough. Avoid vague terms (“digitization” or “optimize”) that leave a reader wondering what, exactly, you mean. Provide enough detail for a reader to understand what needs to be done and who should do it.

Next, anticipate objections to the solution. Perhaps some of the steps you’re proposing seem easy or obvious; if so, explain why they’re harder than they appear or what’s been preventing them up to now. Alternatively, they may seem naïve or overly ambitious; if so, explain why you think they can be achieved nonetheless.

Don’t shy away from difficulties here. Tackle the biggest ones head-on, and if necessary, concede that something really big would need to change for your proposal to work. It’s no crime to be an idealist, as long as you’re a realist about your idealism. Maybe there’s also a more practical compromise; mention that too, if only to highlight how much less satisfying it would be.

Finally, a clear, snappy ending is essential. An argument that peters out or ends in platitudes loses much of its impact. This can sometimes be the hardest part to write. Here are a few approaches.

One is to re-emphasize what’s at stake and describe the consequences if the problem isn’t solved. That alone can be dispiriting, though. More inspiring is to say why the problem is in fact more tractable than it seems—as long as you believe that, of course. Or, if your argument is that a particular person or organization has a clear responsibility to act, you might end by challenging them to do it.

Regardless of which approach you take, it’s a bonus if you can also give your readers a way to act themselves. Can they change a personal habit, put pressure on a decision-maker, or do something else to influence the outcome? Too many opinion pieces leave the reader feeling helpless. Ultimately, the point of your writing this piece is to argue that the world can be made better. That will be true only if people believe it.

Standards and guidelines

As you write, watch out for long sentences and paragraphs. They’re harder to comprehend than shorter ones. To avoid wearing readers out, break lengthy passages into shorter sentences or paragraphs where possible.

Most opinion pieces we publish are between 800 and 1,000 words. It’s best to keep your first draft as close to that length as you can. Provide evidence to support your point of view and use hyperlinks (not footnotes) to cite your sources.

We don’t accept opinion pieces that promote a product, company, or service. If you want to do that, please contact our sponsored content team at insights@technologyreview.com.

We also won’t publish pieces denying climate change. We don’t have many red lines, but this is one of them.

We generally pay only people who make their primary living from writing. If you want payment, let us know when you pitch. Either way, we’ll ask you to sign a contract that gives us exclusive publication rights for an initial period. After that you’ll be free to republish it elsewhere.

We will work with you to edit the piece into shape, but we reserve the right to reject it if we think it will take too much effort to get it there.

You must tell us of any relevant vested interests or conflicts. These probably won’t disqualify you, but they’ll need to be disclosed with the piece. Failure to disclose something that later comes to light will reflect very badly on you and on us.

Read more

2020 has created more than a brave new world. It’s a world of opportunity that is pressuring organizations of all sizes to rapidly adopt technology not just to survive, but to thrive. And Andrew Dugan, chief technology officer at Lumen Technologies, sees proof in the company’s own customer base, where “those organizations that fared the best throughout covid were the ones that were prepared with their digital transformation.” And that’s been a common story this year. A 2018 McKinsey survey showed that, well before the pandemic, 92% of company leaders believed their business models “would not remain economically viable” as their industries digitized. This astounding statistic shows the necessity for organizations to start deploying new technologies, not just for the coming year, but for the coming Fourth Industrial Revolution.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.

Lumen plans to play a key role in this preparation and execution: “We see the Fourth Industrial Revolution really transforming daily life … And it’s really driven by that availability and ubiquity of those smart devices.” With the rapid evolution of smaller chips and devices, acquiring, analyzing, and acting on the data becomes a critical priority for every company. But organizations must be prepared for this increasing onslaught of data.

As Dugan says, “One of the key things that we see with the Fourth Industrial Revolution is that enterprises are taking advantage of the data that’s available out there.” And to do that, companies need to do business in a new way. Specifically, “One is change the way that they address hiring. You need a new skill set, you need data scientists, your world is going to be more driven by software. You’re going to have to take advantage of new technologies.” This mandate means that organizations will also need to prepare their technology systems, and that’s where Lumen helps “build the organizational competencies and provide them the infrastructure, whether that’s network, edge compute, data analytics tools,” continues Dugan. The goal is to use software to gain insights, which will improve business.

When it comes to next-generation apps and devices, edge compute—the ability to process data in real time at the edge of a network (think a handheld device) without sending it back to the cloud to be processed—has to be the focus. Dugan explains: “When a robot senses something and sends that sensor data back to the application, which may be on-site, it may be in some edge compute location, the speed at which that data can be collected, transported to the application, analyzed, and a response generated, directly affects the speed at which that device can operate.” This data must be analyzed and acted on in real time to be useful to the organization. Think about it, continues Dugan: “When you’re controlling something like an energy grid, similar thing. You want to be able to detect something and react to it in near real time.” Edge compute is the function that allows organizations to enter the Fourth Industrial Revolution, and this is the new reality. “We’re moving from that hype stage into reality and making it available for our customers,” Dugan notes. “And that’s exciting when you see something become real like this.”
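
Dugan’s latency point is easy to quantify: even before any processing happens, signal propagation sets a floor on response time, since light in optical fiber covers roughly 200 kilometers per millisecond. A small illustrative calculation (the distances are assumptions, not Lumen figures):

```python
# Back-of-the-envelope latency floor from signal propagation alone.
# Light in optical fiber covers roughly 200 km per millisecond (~2/3 c);
# the distances below are illustrative assumptions, not Lumen figures.
FIBER_KM_PER_MS = 200.0


def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip time to a compute site distance_km away."""
    return 2 * distance_km / FIBER_KM_PER_MS


for label, km in [("on-site edge node", 1.0),
                  ("metro edge location", 50.0),
                  ("distant cloud region", 2000.0)]:
    print(f"{label:>20}: >= {round_trip_ms(km):.2f} ms round trip")
```

A distant cloud region burns 20 milliseconds on the wire before any computation happens, while a metro edge site keeps the floor well under a millisecond, which is the whole argument for moving the application closer to the robot or the grid sensor.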

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Lumen Technologies.

Show notes and links

“Emerging Technologies And The Lumen Platform,” by Andrew Dugan, Automation.com, September 14, 2020

“The Fourth Industrial Revolution: what it means, how to respond,” by Klaus Schwab, The World Economic Forum, January 14, 2016

“Why digital strategies fail,” by Jacques Bughin, Tanguy Catlin, Martin Hirt, and Paul Willmott, McKinsey Quarterly, January 25, 2018

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is building a connected platform for the Fourth Industrial Revolution, which, granted, is a concept that is still being refined in practice, but is undoubtedly here, as data, artificial intelligence, network performance, and devices come together to better serve humans. Two words for you: next-generation apps.

My guest is Andrew Dugan, who is the chief technology officer for Lumen. He has more than 30 years of experience in the telecommunications industry and, unsurprisingly for his time as an engineer, more than 20 patents filed. Andrew, welcome to Business Lab.

Andrew Dugan: Thanks Laurel. I’m very happy to be here.

Laurel: So, launching a new company during a pandemic may not be the most ideal situation, but a great opportunity to rise to the occasion. How has the covid-19 pandemic helped Lumen prepare for, perhaps unexpected, customer needs?

Andrew: Well, covid has been difficult. It’s certainly had a terrible impact on the world, but one of the positive parts is that I’ve been really pleasantly surprised at how our team has responded and how our customers have responded. Covid gave us a really good opportunity to show how our infrastructure and our services are scalable, by turning up emergency bandwidth for our customers in record time. Covid has also measurably increased our customers’ understanding of how important digital capabilities are, because the organizations that fared best throughout covid were the ones that were prepared with their digital transformation.

We’ve watched how our customers’ needs have changed throughout covid. Early on, we did surveys and found that the initial concerns were around supply chain: “Will I be able to get the things that I need to continue to run my business? Will I be able to keep my employees safe?” Since then we’ve seen a shift toward more digital concerns: “Is my new way of operating secure? Do I have the right type of security measures in place? Do I have the right type of network for my remote employees, or maybe for my customers to be able to consume my services?” A lot of businesses are looking forward and saying, “How do I create new forms of revenue in this covid world?” And so they’re looking at technology to help them with that, and we’re finding that the services we have available at Lumen can really help with that need. So, it’s been a difficult time, but also one that’s exciting from a technology perspective.

Laurel: It has been, hasn’t it? We interviewed the CIO at Boston Children’s Hospital, and he said that in the early days of covid, telehealth visits skyrocketed from 20 visits a day to 2,000. Obviously, there’s been a bit of a decrease as patients returned to in-person visits, but clearly this is a huge disruption to the way that things were done. What opportunities during this time of great global disruption do you think could actually be accelerated?

Andrew: As I mentioned, I think businesses have really recognized the power of digital capabilities in today’s world, and I think covid has helped accelerate a lot of businesses in that digital transformation. The longer-term cultural changes that I think will result here usually take generations to occur, and when you’re forced into an environment like the one covid has put us into, it can help accelerate some of those changes. Whether it’s more work from home, the way that health care is provided through more virtual and online services, or the way that people market and sell their services. Who would have thought that selling homes or cars through virtual visits would become a normal way of doing things? Also, the way that people interact. From my own personal experience, I’ve done more social interaction through game nights online. I even did an online wine tasting with my family, and it was quite fun. So, I think we will see continued evolution of products and services, and new revenue streams for companies as they embrace the possibilities of what technology can bring to them.

Laurel: Do you have any examples of what you’re hearing from your customers? Just those “Oh, we didn’t know we could do X, but now we can, and maybe it’ll work out” moments. Just those offhand conversations that you sometimes have.

Andrew: Well, I think a lot of our customers were surprised at how quickly they were able to transform to a remote work environment. They were able to move the majority of their workforces home with little or no disruption to their business. We certainly found that in our business, so I think that was one surprise. Another thing that surprised our customers was the usefulness of online learning. I’m not sure that many people before this would have expected that we could support this level of online learning or online health care. So with those sorts of things, many people did find it surprising how quickly and how well the technology was ready to support them.

Laurel: Yeah, to be able to do that, whether it’s education or telehealth, a complex and fast edge network needs to be built in most places, right? And expanded in others. So when you think of these complexities, how do companies best handle their plans for not just the edge, but also the growing data infrastructure that’s needed to support all of these services?

Andrew: One of the key things that we see with the Fourth Industrial Revolution is that enterprises are taking advantage of the data that’s available out there. There’s a lot more data being generated through things like IoT and smart devices, and the way that enterprises get to take advantage of those, I think, is that they are going to have to do a couple of things. One is change the way that they address hiring. You need a new skill set, you need data scientists, your world is going to be more driven by software. You’re going to have to take advantage of new technologies. Edge compute is one of those that’s emerging and becoming more available, and they’re going to have to learn how to build that into their applications and their processes. And they’re going to have to look at how the data can make them more efficient and what sort of new revenue streams they can create. So those are going to be challenges that they may not have faced before. They may not have had to learn how to use AI and machine learning tools. But I think those will become more critical as the Fourth Industrial Revolution develops, for enterprises to be successful.

Laurel: And that’s one of those things where, if the old saying that every company is a technology company is true, then the technology demands today have advanced pretty greatly, pretty quickly, especially in the face of covid, but in general as devices get smaller and faster and edge compute becomes more real.

Andrew: Yeah, I think that statement is really true, that every company is a technology company. I’ve got a family member who owns a hair salon business, and you wouldn’t think that that’s a technology company, but to interact with your customers, you need to have a digital presence. You need digital tools that may be less data-driven today, but that over time will become more data-driven. So I think you’re absolutely right that almost all businesses are becoming technology businesses to some extent.

Laurel: Especially with AI and ML [machine learning]. You add this all together with edge compute, AI, better devices, faster devices [and you have something new]. So, the World Economic Forum says the Fourth Industrial Revolution isn’t just accelerating but exponentially advancing technological breakthroughs. How specifically does Lumen, or do you, define the Fourth Industrial Revolution?

Andrew: We see the Fourth Industrial Revolution really transforming daily life, not just people’s personal lives but organizations too; as we talked about, enterprises are becoming technology companies. And it’s really driven by the availability and ubiquity of smart devices. Those smart devices are generating data, and the ability of enterprises and businesses to be successful is really being driven by their ability to acquire, analyze, and act on the data coming from those smart devices, to improve their products and services, improve their outcomes as a business, and differentiate themselves from competitors. And for us at Lumen, it’s about how we enable those businesses to use that data: helping them build the organizational competencies and providing the infrastructure, whether that’s network, edge compute, or data analytics tools, to implement insights using software to improve their business.

Laurel: So, thinking about that cycle of acquiring, analyzing, and acting on the data, what are some of the challenges that enterprises have with data and processing it?

Andrew: One of the biggest challenges, as this transformation occurs and as it centers on that data, really does come back to skill sets. If your business is being driven by data, you have to have people who are able to understand that data and extract value from it. That’s data science, and more businesses are going to require data scientists and that skill set to be able to acquire, analyze, and figure out how to act on the data. That’s going to be driven by software, so I think there will be an increasing need for software skill sets. Those are certainly challenges they’re going to face. They’re also going to face technology challenges: how do you deal with the new architectures that are going to be required, whether that’s edge compute or the AI and machine-learning technologies, to handle all of that data and extract its value? And then, how does that affect their processes? A lot of times, their processes today aren’t built around data, and those processes can be too slow. Data provides a real opportunity to improve efficiency and speed, and gives them more ability to make real-time decisions as they automate the analysis of that data. So having skills for things like robotic process automation across the organization is going to be important, too. Improving their people’s skill sets, how they take advantage of technology, and how that affects their processes are all going to be challenges they have to deal with.

Laurel: That’s an excellent point. It’s not just one thing, is it? You really do have to improve the entire system down the line. And the focus for some companies may be hiring, while for others it may be the apps, solutions, and deployment, because they have the infrastructure already built. As we know, the data has come out, and the companies that have done better during this time are the ones that had already started or were in process with their digital transformation. So what, specifically, are some of the characteristics you see in forward-looking companies, or companies that have started their digital transformation or are in the process of it? What kinds of technologies and thinking are they using and deploying?

Andrew: Yeah, I think that varies by industry. We talk to a lot of larger enterprises. People who are building smart factories, as an example, are dealing with questions like: How do they make better use of robotics? How do they build that infrastructure? How do they run that infrastructure? How do they make it more secure? We see other enterprises that are looking to collect information about how their services are used and what their customers want to do with them, collecting that data and trying to figure out how to use AI and machine learning to better predict what their customers will need. So it really varies by industry, but it’s the software tool sets that are out there to help them solve their business problems through data, and also the infrastructure they’re going to need to run things like smart factories, with robots that are connected through wireless technologies, feeding data back through sensors to applications that may not be located on-site. How do you run and operate those applications? How do you connect it all together and make it work seamlessly? Those are some of the things we’re seeing.

Laurel: And it’s a very complex issue for sure. So, speaking of robots, there’s always this discussion about automation and the work that robots can do instead of people, specifically those “tedious tasks,” which frees humans up to do more creative work. What kinds of opportunities do you see with robotics and automation?

Andrew: Oh, I see quite a bit. It’s a way for businesses to become more efficient, produce a better-quality product, and have a safer environment. Going back to that smart factory example, we’re talking with customers who are trying to figure out how to take advantage of the advancements in robotics and how to build out the infrastructure. One of the things we found is that customers need help with deploying and managing those applications. They need help with the connectivity of those robots to the network. They need to ensure that the infrastructure supporting them can handle the real-time processing that’s so important in these robotics applications. And they’re looking for somebody who can help them design these solutions end to end, from the enterprise location where the factory is, through the edge, to the centralized cloud. That’s something we’re in a good position to help with, and it has become a recurring conversation as those enterprises figure out how to take advantage of the automation that robotics provides.

Laurel: Yeah, speaking of that competitive advantage, where are you seeing it? Smart factories and those edge devices? Are there any unexpected places where you’re starting to see that advantage come through?

Andrew: Yes, there are. There are some things that I think are less obvious. One of our customers is a retail food chain, and you wouldn’t think that these technologies, the applications, the processing of data, would be as important as they are. When you drive up to a restaurant, you want to go through the drive-through and get something, and you see the line wrapping around the building. At certain restaurants you look at that and say, “Oh, that line is going to take me too long.” But at other restaurants you look at it and say, “Yeah, that line does wrap around the building, but I know from my experience that I can get through it in just a few minutes.” The fact that those restaurants run an efficient line like that is not by accident, and it’s not necessarily just hard work by the employees, although they do work hard. It’s because the applications they’re using have created a more efficient operation: the automation of the food preparation inside, how they collect orders from customers, how they process the orders, the processes that allow them to operate as a business. So it is affecting every part of the business, even the parts you wouldn’t think are highly dependent on data and applications, like a retail food establishment. Their business success is becoming increasingly dependent on the things that are enabled by the Fourth Industrial Revolution.

Laurel: That’s really interesting, because when you think about just that one example, there are so many edges there, right? And that doesn’t even go into supply chain and efficiency across the entire retail chain, across a certain geographic area. When we think about this kind of real-time response rate, yes, we have this example in a retail food chain, but why is it so important? Why is real-time processing such a key component of the Fourth Industrial Revolution?

Andrew: I think there are a couple of reasons why. One is that in many cases data has a very short useful life. Whether it’s that robotics example or others like smart energy grids, you’ve got sensors out there collecting information, and the applications being written to react to those sensors are written for real-time response. Going back to the robotics example: when a robot senses something and sends that sensor data back to the application, which may be on-site or in some edge compute location, the speed at which that data can be collected, transported to the application, analyzed, and a response generated directly affects the speed at which that device can operate. So the ability to manage and process that data in real time is critical for those types of applications. When you’re controlling something like an energy grid, it’s a similar thing: you want to be able to detect something and react to it in near real time. There are also safety examples, where you’ve got video processing managing the movement of something around a campus. The ability of the camera to see something, detect it, and react to it is critical for safety. So we’re seeing a lot of applications for which fast processing of data is becoming very important.

Another reason for real time is that the amount of data being generated out there is just huge. That data is moving quickly, and you don’t necessarily have to store it over a long period of time. As that data is coming in, you want to be able to process it as quickly as you can, extract whatever value you can out of it, and then dispose of it. You don’t want to get behind in that processing, so the ability to handle it in real time is also important.
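That process-then-discard pattern can be sketched in a few lines of Python. This is a minimal illustration, assuming a simple rolling window and an anomaly threshold; the parameter values are hypothetical, not anything Lumen prescribes.

from collections import deque
import statistics

def process_stream(readings, window=50, threshold=3.0):
    """Act on each sensor reading as it arrives, then let it age out.

    Keeps only a short rolling window; the raw stream is never stored.
    `window` and `threshold` are illustrative, hypothetical parameters.
    """
    recent = deque(maxlen=window)  # old samples fall off automatically
    for value in readings:
        if len(recent) >= 10:  # wait for a minimal baseline first
            mean = statistics.fmean(recent)
            spread = statistics.pstdev(recent) or 1.0
            if abs(value - mean) > threshold * spread:
                yield value  # react now: the value's useful life is short
        recent.append(value)

# Example: flag a spike in an otherwise steady sensor feed.
feed = [20.0] * 60 + [95.0] + [20.0] * 10
print(list(process_stream(feed)))  # -> [95.0]

The bounded deque caps memory no matter how large the stream grows, which is the property being described here: extract the value while the data is fresh, then let it go.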

Laurel: Yeah. Focusing on that sense, detect, and react: that of course has a lot to do with security as well. The attack surface that enterprises are looking at now is growing, right? It’s every device, every network connection, every point. How is security tackled, and how is this a priority for businesses?

Andrew: Yeah, this is a really interesting problem, I think. Years ago, an enterprise would build a private network and protect it largely with perimeter-based security: you make sure that the data and people getting into that network are the data and people you want there. You could protect a lot using a perimeter model like that. But as applications distribute, and as they become available on the public internet, perimeter-based security is not the only thing you can rely on. You have to think about security at every layer, and the layers I think you have to worry about today are your network, your operating system, your application security, and your data security.

From a network perspective, you want to ensure that you’re operating on a network that is inherently secure. One of the things we do at Lumen to help with that is a group we call Black Lotus Labs. It’s a research group inside the company, and its job is to analyze data available through the internet, studying traffic patterns and detecting malicious actors out there, and then build that protection into our networking and enterprise security products. By doing that, we can make the network inherently more secure. At the operating system and application levels, you need to make sure that you’re continually patching, that you understand what exposures might exist in the operating system running your applications and in the applications themselves, and that you’re continuing to close any gaps that are found. And as data becomes more available, as we extract more and more valuable information about our customers and users through data analytics, data privacy and security are becoming even more important. So the use of data encryption where appropriate, and ensuring that you have the right data security and controls in place, is also critically important. So yeah, we’ve changed quite a bit from a perimeter model to one where you need to think about security at every layer of the network and every layer of your application.
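On the data layer specifically, “use of data encryption where appropriate” can be as simple as authenticated symmetric encryption before records are stored or shipped. Here is a minimal sketch in Python using the third-party cryptography package, an illustrative choice rather than a tool named in the conversation.

from cryptography.fernet import Fernet  # pip install cryptography

# In practice the key would come from a key-management service,
# not be generated inline; this is purely illustrative.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer telemetry: site=42, status=ok"
token = fernet.encrypt(record)          # authenticated ciphertext, safe to store or ship
assert fernet.decrypt(token) == record  # tampering raises InvalidToken instead

Fernet bundles encryption with integrity checking, so a tampered ciphertext fails to decrypt rather than silently yielding corrupted data.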

Laurel: And that makes sense, as everything becomes much more integrated and, like you said, the data at every layer demands that sort of response. So when I think about customers, that’s a broad category, and Lumen obviously is a bit behind the scenes to its customers’ customers, but still very important. You need to care about how everyone is using the network and devices. How do you instill that curiosity into your organization, where you look out and are responsible for the experiences of many different people and many different applications? It’s hard, I guess, to sometimes square what a smart factory does with a food retail outlet, but at the same time, you’re still reliably giving both of them network connectivity, securely and quickly, to allow them to do what they need to do.

Andrew: Well, I think you hit on it there. Even though it’s our customers’ customers whose experience we’re ultimately trying to drive, we really do have a direct effect on that. As you outlined, it’s the network experience: we provide a lot of the underlying infrastructure, and the performance of our network directly affects those end customers’ experience. So that’s really important. How secure we make our network and our infrastructure also directly affects those end customers. So we try to instill in our employees, and in our products and services, the recognition that we are here to create a great experience for our customers and, indirectly, for their customers. And I think we do a good job of that. I think everybody recognizes how critical the services we provide are, and that our customers rely on us.

Laurel: Absolutely. So, one last question for you as an engineer. We’ve touched on so many different aspects, and we could easily talk for days about certain parts of this conversation, especially security. But what are you most excited or curious about? What gets you really happy to read the news, to get going, to do the hard work that helps companies do those amazing things?

Andrew: Well, being an engineer, I get excited about technology. There’s so much we can help our customers do, not just to improve their businesses but to improve society overall. I look at technology as a real tool that we can make available to our customers to make things better. And it’s really fun for me to be involved in the development of the technologies that empower them to take advantage of this Fourth Industrial Revolution. One of the things that gets me up on a daily basis recently is the development around edge compute and supporting these applications that are becoming more performance-sensitive. How do we build and manage the infrastructure that lets those applications operate with a high degree of performance, so that they can provide that real-time feedback and real-time improvement to our customers? It’s pretty exciting: the edge compute part of what we’re building is relatively new. The conversation has been around in the industry for a couple of years, but it’s now becoming real, and we’re moving from that hype stage into reality and making it available for our customers. And that’s exciting, when you see something become real like this.

Laurel: It is. Anything to get away from the hype and into the reality. Andrew, thank you so much for joining me today in what has been just a fantastic conversation on the Business Lab.

Andrew: Thank you very much. Enjoyed it.

Laurel: That was Andrew Dugan, chief technology officer for Lumen, whom I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology, and you can find us in print, on the web, and at dozens of events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

Read more

BepiColombo, a Mercury-bound mission jointly run by the European Space Agency and the Japan Aerospace Exploration Agency (JAXA), is snapping a wealth of new images and collecting data that may tease out new clues about the Venusian atmosphere—and whether it could be home to extraterrestrial life.

What happened: On Thursday morning, as part of its long journey to Mercury, BepiColombo made a close pass of Venus at a distance of about 6,660 miles. The flyby is meant to use Venus’s gravity as a speed-reducing force, adjusting the spacecraft’s trajectory toward its eventual destination.
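For intuition on how a flyby sheds speed, consider a simplified, planar two-body sketch (not the mission team’s actual trajectory math). In Venus’s rest frame the encounter is elastic: the spacecraft leaves with the same relative speed it arrived with, only rotated by a deflection angle \( \delta \). In the Sun’s frame, the outgoing velocity is

$$ \vec{v}_{\mathrm{out}} = \vec{v}_{\mathrm{Venus}} + R(\delta)\,\bigl(\vec{v}_{\mathrm{in}} - \vec{v}_{\mathrm{Venus}}\bigr), $$

where \( R(\delta) \) is the rotation applied to the planet-relative velocity. Flyby geometry that rotates that relative velocity against Venus’s orbital motion lowers the heliocentric speed, which is exactly the braking BepiColombo needs to keep dropping inward toward Mercury.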

Hype of life: Although the flyby was planned for maneuvering purposes, it afforded scientists an opportunity for a closer look at Venus. Interest in the flyby has been heightened since last month’s revelation that Venus’s clouds contain phosphine, a possible sign of biological activity on the planet. If the phosphine is there, there’s a good chance it’s a result of biology, which means life might be residing within the thick, carbon-rich atmosphere. However, it’s also possible those traces of phosphine are the result of exotic natural chemistry not found on Earth. Still cool, but not aliens.

What did the mission actually observe? Most of BepiColombo’s instruments are stored away until the rendezvous with Mercury—including its primary camera. Those that are functional at the moment (10 in total) are designed primarily for studying the atmosphere-less Mercury. But there are still some bits of data the spacecraft collected that may be useful.

A sequence of images taken during BepiColombo’s flyby of Venus on October 15.
ESA/BEPICOLOMBO/MTM

Two smaller cameras facing the spacecraft itself are turned on, and they managed to take several photos of Venus (obscured a bit by the probe’s magnetometer and antenna). An onboard spectrometer, which measures emitted electromagnetic wavelengths to unravel the chemistry of other objects, took over 100,000 spectral images of the Venusian atmosphere. Other instruments studied the planet’s temperature and density, as well as its magnetic environment and how it interacts with the solar wind.

Don’t hold your breath: It’s unlikely that the spectrometer and other activated instruments were able to study phosphine molecules on Venus during this flyby. But they might be able to hint at the presence of other biosignatures that could bolster evidence for possible life on Venus. 

Moreover, this first flyby of Venus can be thought of as a practice run for a second one BepiColombo will make in August 2021. Now that the mission team has a better sense of how to calibrate these instruments to study Venus closely, it will have an opportunity to collect higher-quality data next year, when the distance will shrink to just 340 miles. The chances of detecting phosphine on that flyby are still slim, but not zero. And traces of other biosignatures could be spotted too.

And what about Mercury? The mission will make its first flyby of Mercury the following October. The three spacecraft that make up BepiColombo will separate when the mission enters Mercury’s orbit in 2025.

Read more