Ice Lounge Media

When the 2020 coronavirus pandemic forced workers across the United States to stop congregating in offices and work from home, Siemens USA was prepared to protect its newly remote workforce and identify and repel potential data breaches. It turned to AIOps—artificial intelligence for IT operations—and a specialized security system to immediately secure and monitor 95% of its 400,000 PCs, laptops, mobile devices, and other interfaces used by employees regardless of where they were using them.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

“The underlying driver in this context is speed,” says Adeeb Mahmood, senior director of cybersecurity operations for Siemens USA in Washington, DC. “The faster we are able to detect and prevent threats to our devices and critical data, the better protected our company is.” 

Siemens USA, a manufacturer of industrial and health-care equipment, uses AIOps through its endpoint detection and response system that incorporates machine learning, the subset of AI that enables systems to learn and improve. The system gathers data from endpoints—hardware devices such as laptops and PCs—and then analyzes the data to reveal potential threats. The organization’s overall cybersecurity approach also uses data analytics, which allows it to quickly and efficiently parse through numerous log sources. The technology “provides our security analysts with actionable outputs and enables us to remain current with threats and indicators of compromise,” Mahmood says.

AIOps is a broad category of tools and components that uses AI and analytics to automate common IT operational processes, detect and resolve problems, and prevent costly outages. Machine-learning algorithms monitor systems, learning as they go how those systems perform, and flag problems and anomalies. Now, as adoption of AIOps platforms gains momentum, industry observers say IT decision-makers will increasingly use the technology to bolster cybersecurity—as Siemens does, in integration with other security tools—and guard against a multitude of threats. This is happening against a backdrop of mounting complexity in organizations’ application environments, which span public and private cloud deployments and perennially need to scale up or down in response to business demand. Further, the massive migration of employees to their home offices in an effort to curb the deadly pandemic amounts to an exponential increase in the number of edge-computing devices, all of which require protection.

A May report from Global Industry Analysts predicts the AIOps platform market worldwide will grow by an estimated $18 billion this year, driven by a compounded growth rate of 37%.1 It also projects that AIOps initiatives—particularly among big corporations—will span the entire corporate ecosystem, from on-premises to public, private, and hybrid clouds to the network edge, where resources and IT staff are scarce. Most recently, a well-documented rise in data breaches, particularly during the pandemic, has underscored the need to deliver strong, embedded security with AIOps platforms.

Faster than a speeding human

Cybersecurity affects every aspect of business and IT operations. The sheer number of near-daily breaches makes it difficult—if not impossible—for organizations, IT departments, and security professionals to cope. In the last year, 43% of companies worldwide reported multiple successful or attempted data breaches, according to an October 2019 survey conducted by KnowBe4, a security awareness training company.2 Nearly two-thirds of respondents worry their organizations may fall victim to a targeted attack in the next 12 months, and today concern is further fueled by the growing number of cybercrimes amid disarray caused by the pandemic. Organizations need to use every technological means at their disposal to thwart hackers.

The strongest AIOps platforms can help organizations proactively identify, isolate, and respond to security issues, and help teams assess the relative impact on the business. They can determine, for example, whether a potential problem is ransomware, which infiltrates computer systems and shuts down access to critical data. Or they can ferret out threats with longer-term effects, such as leaks of customer data that cause massive reputational damage. That’s because AIOps platforms have full visibility into an organization’s data, spanning traditional departmental silos. They apply analytics and AI to that data to determine the typical behavior of an organization’s systems. Once they have that “baseline state,” the platforms continually reassess the network—and all wired and wireless devices communicating on it—and zero in on outlier signals. If a signal is suspicious, exceeding a threshold defined by AI, an alert is sent to IT security staffers detailing the threat, the degree to which it could disrupt the business, and the steps needed to eliminate it.
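That baseline-then-outlier loop can be sketched in a few lines of Python. This is an illustrative toy, not Siemens’s system or any vendor’s actual detection pipeline; real platforms correlate many signals at once, but the core idea of flagging deviations from a learned baseline looks roughly like this:

```python
from statistics import mean, stdev

def find_anomalies(baseline, observations, z_threshold=3.0):
    """Flag observations that deviate sharply from a learned baseline.

    `baseline` holds metric samples (say, outbound bytes per minute)
    gathered during normal operation; any new observation more than
    `z_threshold` standard deviations from the baseline mean is
    surfaced for analyst review.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observations if abs(x - mu) > z_threshold * sigma]

# Traffic normally hovers near 100 units; the 900-unit spike is flagged.
normal = [98, 102, 101, 99, 97, 103, 100, 101, 99, 100]
print(find_anomalies(normal, [101, 99, 900, 102]))  # [900]
```

In practice the baseline itself is refreshed continually, so the definition of "normal" tracks how the systems actually behave over time.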

In the middle of the night on Monday, the two cosmonauts and one astronaut on the International Space Station were woken up by a call from mission control. They were told that there was a hole in a module on the Russian side of the station that was leaking precious air out of the $150 billion spacecraft and into the vacuum of space. They were tasked with hunting down the precise location of the leak and seeing if they could patch it up, because the leak seemed to have grown alarmingly bigger (an erroneous reading later attributed to a temperature change in the cabin). And that was actually the good news.

The ISS has been dealing with the air leak for over a year. First discovered in September 2019 when NASA and its partners observed a slight dip in air pressure, the problem has never posed a threat to crews on board. It was only in August, after ground crews noticed the leak was getting worse, that an investigation was launched to finally find the source and remedy the problem. 

Since then, American astronaut Chris Cassidy and Russian cosmonauts Anatoly Ivanishin and Ivan Vagner have spent multiple weekends hunkered down in a single module while they close the rest of the station’s hatches and measure the air pressure changes in the other modules. After several of these weekend astronaut slumber parties, mission control determined that the leak was in the Zvezda module (which provides life support to the Russian side of the station), leading to Monday night’s search party.

The ISS always loses a tiny bit of air, and that simply requires replacing the nitrogen and oxygen tanks during regular resupply missions. But the fact that the leak was getting worse would require the tanks to be replaced sooner than expected. It also means the hole that’s allowing the leak may have gotten bigger, and could still grow if not dealt with soon.

“These leaks are predictable,” Sergei Krikalyov, the executive director of Russia’s crewed space program, said in televised comments. “What’s happening now is more than the standard leakage and naturally if it lasts a long time, it will require supplies of extra air to the station.”

To find the exact location of the leak in Zvezda so it can be repaired, Cassidy and his crewmates will have to spend some time floating around the module with a handheld device called an ultrasonic leak detector, which picks up the frequencies emitted by air rushing out of small holes and cracks. Noise on the station can make these frequencies harder to detect, and the crew may have to sweep some areas several times to actually find the source. One company wants to improve on this strategy by deploying an automated robot that can “listen” for leaks and identify them in real time, without the need for a human hand. Once the crew has found the source of the leak, they will patch it up with a kit using epoxy resin.
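The principle behind such a detector can be illustrated with a toy signal-processing sketch. The numbers here (a 40 kHz hiss sampled at 96 kHz) are hypothetical, and a real instrument does far more filtering, but checking for energy at one ultrasonic frequency can be done with the classic Goertzel algorithm:

```python
import math

def tone_power(samples, sample_rate, target_hz):
    """Estimate signal power near one frequency (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# A made-up 40 kHz hiss sampled at 96 kHz, versus silence.
rate, n = 96_000, 960
hiss = [math.sin(2 * math.pi * 40_000 * t / rate) for t in range(n)]
silence = [0.0] * n
print(tone_power(hiss, rate, 40_000) > 1_000)     # True: strong ultrasonic energy
print(tone_power(silence, rate, 40_000) < 1e-6)   # True: nothing to hear
```

A handheld unit sweeps many such frequency checks across the ultrasonic band and points the operator toward wherever the energy is strongest.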

Leaks can also involve substances other than the station’s air. The ISS has previously dealt with ammonia leaks coming from the station’s cooling loops. Since ammonia is toxic to humans, such leaks require immediate action, involving lengthy spacewalks to identify holes in the coolant system and repair them.

The ongoing issue goes to show that even a spacecraft as well designed and protected as the ISS is not invulnerable. And as we see more countries and companies send humans on crewed missions into orbit, such leaks will be a much more common occurrence. Not every spacecraft will be as resistant to the problems as the ISS.

There are a couple of major culprits behind leaks on a spacecraft. The most high-profile ISS leak in recent memory was found in August 2018—a 2-millimeter hole on a Russian Soyuz spacecraft docked to the station at the time. That hole appears to have been the result of a drilling error made during manufacturing (although Russia’s space agency has been cagey about exactly what caused it). The mystery of that leak was great fodder for conspiracy theorists, but the fact that the hole was accidentally made by a drill was lucky. A hole like that is clean and precise, and not very susceptible to cracks or expansion.

But when the ISS springs a leak without a clear cause, the prime suspect is a chance collision with a micrometeoroid or a small piece of debris (some just millimeters or less in size). Objects in Earth’s orbit zip around at extremely high speeds. The International Space Station, for example, has an average speed of 7.66 kilometers per second, or over 17,000 mph. Some micrometeoroids whiz through space at over 20,000 mph. At those ultra-high speeds, even objects smaller than a centimeter can absolutely shred larger ones, like a bullet from a gun. That sort of messy destruction can leave behind cracks or structural damage that propagates through the rest of the spacecraft hull, or it can pierce the ammonia coolant system.
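Those speed figures are straightforward to verify:

```python
KM_PER_MILE = 1.609344  # exact length of a statute mile in kilometers

iss_km_per_s = 7.66                          # average ISS orbital speed
iss_mph = iss_km_per_s * 3600 / KM_PER_MILE  # km/s -> km/h -> mph
print(round(iss_mph))  # 17135 -- "over 17,000 mph," as stated

# And 20,000 mph back into metric, for comparison:
print(round(20_000 * KM_PER_MILE / 3600, 1))  # 8.9 km/s
```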

A view of the wireless ultrasonic leak detector aboard the International Space Station. (NASA/Shane Kimbrough/JSC)

Pressurized spacecraft, usually designed for human habitation, are more vulnerable to these problems, since the internal pressure puts added stress on the spacecraft hull. “Cracks are more vulnerable to added stressors,” says Igor Telichev, an engineer at the University of Manitoba in Canada and an expert in spacecraft collisions with debris. “A hole, even a large one, is of course bad, but a crack could start propagating throughout the structure and threaten its entire integrity.”

Engineers try to design spacecraft with shields that can withstand certain collisions from micrometeoroids and small bits of space debris. For the ISS, they used something called a Whipple shield (named after its inventor, the late Harvard astronomer Fred Whipple). It’s a thin outer bumper spaced some distance away from the main wall of the spacecraft. The bumper doesn’t outright stop incoming micrometeoroids or other small debris, but instead breaks them up into a cloud of small particles that fan out over a large area and pose less of a risk. For the wall, it’s the difference between facing a single large bullet and a smattering of birdshot.

There are a number of different variants on the Whipple shield—some, for example, are augmented with Kevlar or ceramic filling between layers. The ISS itself has over 100 different Whipple shield configurations, as some areas are more vulnerable to micrometeoroid collisions than others. 

But as evidenced by the station’s history with micrometeoroid impacts, Whipple shields aren’t foolproof. Future crew vehicles and space stations that will be made for much less than the ISS will likely be more vulnerable to leaks caused by collisions with small debris and particles. 

When the ISS was first being built 20 years ago, few experts anticipated how many more objects would soon be coursing through Earth’s orbit. The problem is poised only to get worse as the space industry expands and humans launch more spacecraft than ever into orbit. We can build shielding that accounts for a changing environment, but not even the best models of future debris accumulation can predict everything.

In February 2009, the Iridium 33 and Kosmos-2251 satellites collided, creating a huge swath of debris that began circulating through Earth’s orbit. The largest pieces were identified and tracked, but debris less than 10 centimeters in length—pieces that still pose a threat to spacecraft hulls—was allowed to zip through space undetected. The accident illustrated that unanticipated events could greatly exacerbate the problem of protecting spacecraft. “Any big accident could drastically change the situation and increase the risks for any number of other spacecraft in orbit,” says Telichev. “What we develop today might not be good enough by tomorrow.”

Shielding can help prevent leaks from coming up, but “this problem is unavoidable,” says Telichev. That means it will be even more critical to be able to isolate and repair leaks as they come up.

For Telichev and others, the solution really comes down to better management of space itself, and to reducing the accumulation of debris large and small. “If the world’s governments don’t pay attention to the problem now,” he says, “it’s not going to go away on its own.”

Cassidy and his crewmates were still looking for the leak as of Wednesday morning. A Northrop Grumman Cygnus resupply mission is scheduled to launch Thursday night, followed by a Soyuz mission on October 14 to bring another two cosmonauts and one astronaut to the ISS. Between unpacking the new supplies and scientific experiments, and welcoming the new crew, there won’t be a whole lot of time to find the leak over the next few weeks, so the pressure is, figuratively, on.

Ask Stefan Jockusch what a factory might look like in 10 or 20 years, and the answer might leave you at a crossroads between fascination and bewilderment. Jockusch is vice president for strategy at Siemens Digital Industries Software, which develops applications that simulate the conception, design, and manufacture of products like cell phones or smart watches. His vision of a smart factory is abuzz with “independent, moving” robots. But they don’t stop at making one or three or five things. No—this factory is “self-organizing.”

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.

“Depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product,” Jockusch says. “It will self-organize itself to do something different.”

Behind this factory of the future is artificial intelligence (AI), Jockusch says in this episode of Business Lab. But AI starts much, much smaller, with the chip. Take automaking. The chips that power the various applications in cars today—and the driverless vehicles of tomorrow—are embedded with AI, which support real-time decision-making. They’re highly specialized, built with specific tasks in mind. The people who design chips then need to see the big picture.

“You have to have an idea if the chip, for example, controls the interpretation of things that the cameras see for autonomous driving. You have to have an idea of how many images that chip has to process or how many things are moving on those images,” Jockusch says. “You have to understand a lot about what will happen in the end.”

This complex way of building, delivering, and connecting products and systems is what Siemens describes as “chip to city”—the idea that future population centers will be powered by the transmission of data. Factories and cities that monitor and manage themselves, Jockusch says, rely on “continuous improvement”: AI executes an action, learns from the results, and then tweaks its subsequent actions to achieve a better result. Today, most AI is helping humans make better decisions.

“We have one application where the program watches the user and tries to predict the command the user is going to use next,” Jockusch says. “The longer the application can watch the user, the more accurate it will be.”
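A toy version of that behavior is a first-order model that simply counts which command tends to follow which. The class below is a hypothetical sketch, not Siemens’s implementation (which reaches better than 95% accuracy and surely uses far richer context), but it shows why watching a user longer sharpens the suggestions:

```python
from collections import Counter, defaultdict

class CommandPredictor:
    """Guess a user's next command from the one just issued.

    Each observed (previous, next) pair is counted, and the most
    frequent follower is offered as the prediction. The longer the
    model watches a user, the more transitions it has to draw on.
    """
    def __init__(self):
        self.followers = defaultdict(Counter)

    def observe(self, prev_cmd, next_cmd):
        self.followers[prev_cmd][next_cmd] += 1

    def predict(self, cmd):
        counts = self.followers[cmd]
        return counts.most_common(1)[0][0] if counts else None

p = CommandPredictor()
for prev, nxt in [("sketch", "extrude"), ("sketch", "extrude"),
                  ("sketch", "revolve"), ("extrude", "fillet")]:
    p.observe(prev, nxt)
print(p.predict("sketch"))   # extrude
print(p.predict("fillet"))   # None -- nothing learned for this command yet
```

The same counting scheme also captures the knowledge-transfer idea: transitions recorded while an expert works can seed the model handed to a novice.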

Applying AI to manufacturing can result in cost savings and big gains in efficiency. Jockusch gives an example from a Siemens factory of printed circuit boards, which are used in most electronic products. The milling machine used there has a tendency to “goo up over time—to get dirty.” The challenge is to determine when the machine has to be cleaned so it doesn’t fail in the middle of a shift.

“We are using actually an AI application on an edge device that’s sitting right in the factory to monitor that machine and make a fairly accurate prediction when it’s time to do the maintenance,” Jockusch says.
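That “fairly accurate prediction” can be illustrated with a simple trend extrapolation. The metric and numbers below are invented for illustration; the real edge application Jockusch describes is certainly more sophisticated, but the scheduling logic has the same shape: fit a trend to a fouling signal and estimate when it crosses a failure threshold.

```python
import math

def shifts_until_cleaning(readings, threshold):
    """Fit a linear trend to per-shift fouling readings and estimate
    how many more shifts remain before the metric crosses `threshold`.

    Returns None if the readings show no upward trend.
    """
    n = len(readings)
    x_mean = (n - 1) / 2
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(readings))
    slope /= sum((x - x_mean) ** 2 for x in range(n))
    if slope <= 0:
        return None
    intercept = y_mean - slope * x_mean
    crossing = (threshold - intercept) / slope   # shift index at failure
    return max(0, math.ceil(crossing) - n)

# Hypothetical spindle-drag readings creeping up about 0.5 per shift:
drag = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]
print(shifts_until_cleaning(drag, 20.0))  # 12 shifts left before cleaning is due
```

Running this on the edge device, next to the machine, means the prediction arrives without any round trip to a data center.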

The full impact of AI on business—and the full range of opportunities the technology can uncover—is still unknown.

“There’s a lot of work happening to understand these implications better,” Jockusch says. “We are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole.”

Business Lab is hosted by Laurel Ruma, director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in partnership with Siemens Digital Industries Software.

Show notes and links

“Siemens helps Vietnamese car manufacturer produce first vehicles,” Automation.com, September 6, 2019

“Chip to city: the future of mobility,” by Stefan Jockusch, The International Society for Optics and Photonics Digital Library, September 26, 2019

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace. Our topic today is artificial intelligence and physical applications. AI can run on a chip, on an edge device, in a car, in a factory, and ultimately, AI will run a city with real-time decision-making, thanks to fast processing, small devices, and continuous learning. Two words for you: smart factory.

My guest is Dr. Stefan Jockusch, who is vice president for strategy for Siemens Digital Industries Software. He is responsible for strategic business planning and market intelligence, and Stefan also coordinates projects across business segments and with Siemens Digital Leadership. This episode of Business Lab is produced in association with Siemens Digital Industries. Welcome, Stefan.

Stefan Jockusch: Hi. Thanks for having me.

Laurel: So, if we could start off a bit, could you tell us about Siemens Digital Industries? What exactly do you do?

Stefan: Yeah, in the Siemens Digital Industries, we are the technical software business. So we develop software that supports the whole process from the initial idea of a product like a new cell phone or smartwatch, to the design, and then the manufactured product. So that includes the mechanical design, the software that runs on it, and even the chips that power that device. So with our software, you can put all this into the digital world. And we like to talk about what you get out of that, as the digital twin. So you have a digital twin of everything, the behavior, the physics, the simulation, the software, and the chip. And you can of course use that digital twin to basically do any decision or try out how the product works, how it behaves, before you even have to build it. That’s in a nutshell what we do.

Laurel: So, staying on that idea of the digital twin, how do we explain the idea of chip to city? How can manufacturers actually simulate a chip, its functions, and then the product, say, as a car, as well as the environment surrounding that car?

Stefan: Yeah. Behind that idea is really the thought that in the future, and today already, we have to build products enabling the people who work on them to see the whole, rather than just a little piece. So this is why we make it as big as to say from chip to city, which really means, when you design a chip that runs in a vehicle of today, and more so in the future, you have to take a lot of things into account while you are designing that chip. If the chip, for example, controls the interpretation of things that the cameras see for autonomous driving, you have to have an idea of how many images that chip has to process, how many things are moving on those images, and, obviously, pedestrians: what recognition do you have to do? You have to understand a lot about what will happen in the end. So the idea is to enable a designer at the chip level to understand the actual behavior of a product.

And what’s happening today especially is that we don’t develop cars anymore with just the car in mind; we more and more are connecting vehicles to the environment and to each other. And one of the big purposes, as we all know, is of course to reduce the pollution in cities and also the traffic in cities, so really to make these metropolitan areas more livable. So that’s also something we have to take into account in this whole process chain, if we want to see the whole as a designer. So this is the background of this whole idea, chip to city. And the way it should look for a designer, if you think about it: I’m designing this vision module in a car, and I want to understand how powerful it has to be. I have a way to immerse myself into a simulation, a very accurate one, and I can see what data my vehicle will see, what’s in it, how many sensor inputs I get from other sources, and what I have to do. I can really play through all of that.

Laurel: I really like that framing of being able to see the whole, not just the piece of this incredibly complex way of thinking, building, delivering. So to get back down to that piece level, how does AI play a role at the chip level?

Stefan: AI is a lot about supporting or even making the right decision in real time. And that’s, I think, where AI and the chip level become so important together, because we all know that a lot of smart things can be done if you have a big computer sitting somewhere in a data center. But AI at the chip level is really very targeted at applications that need real-time performance, and a performance that doesn’t have time to communicate a lot. And today it’s really evolving so that the chips that do AI applications are now designed in a very specialized way: whether they have to do a lot of compute, whether they have to conserve energy as best they can, so very low power consumption, or whether they need more memory. So yeah, it’s becoming a more and more commonplace thing that we see AI embedded in tiny little chips, and in future cars we will probably have a dozen or so semiconductor-level AI applications for different things.

Laurel: Well, that brings up a good point because it’s the humans who are needing to make these decisions in real time with these tiny chips on devices. So how does the complexity of something like continuous learning with AI, not just help the AI become smarter but also affect the output of data, which then eventually, even though very quickly, allows the human to make better decisions in real time?

Stefan: I would say most applications of AI today are rather designed to help a human make a good decision rather than making the decision. I don’t think we trust it quite that much yet. So as an example, in our own software, like so many makers of software, we are starting to use AI to make it easier and faster to use. So for example, you have these very complex design applications that can do a lot of things, and of course they have hundreds of menus. So we have one application where the program watches the user and tries to predict the command the user is going to use next. So just to offer it and just say, “Aren’t you about to do this?” And of course, you talked about the continuous improvement, continuous learning—the longer the application can watch the user, the more accurate it will be.

It’s currently already at a level of over 95%, but of course continuous learning improves it. And by the way, this is also a way to use AI not just to help a single user but to start encoding a knowledge, an experience, a varied experience of good users and make it available to other users. If a very experienced engineer does that and uses AI and you basically take those learned lessons from that engineer and give it to someone less experienced who has to do a similar thing, that experience will help the new user as well, the novice user.

Laurel: That’s really compelling because you’re right—you’re building a knowledge database, an actual database of data. And then also this all helps the AI eventually, but then also really does help the human because you are trying to extend this knowledge to as many people as possible. Now, when we think about that and AI at the edge, how does this change opportunities for the business, whether you’re a manufacturer or the person using the device?

Stefan: Yeah. In general, of course, it’s a way for everyone who makes a smart product to differentiate, to create differentiation, because all these functions enabled by AI are of course smart, and they give some differentiation. But the example I just mentioned, where you can predict what a user will do, that of course is something that many pieces of software don’t have yet. So it’s a way to differentiate. And it certainly opens lots of opportunities to create these very highly differentiated pieces of functionality, whether it’s in software, in vehicles, or in any other area.

Laurel: So if we were actually to apply this perhaps to a smart factory and how people think of a manufacturing chain, first this happens, and then that happens and a car door is put on and then an engine is put in or whatever. What can we apply to that kind of traditional way of thinking of a factory and then apply this AI thinking to it?

Stefan: Well, we can start with the oldest problem a factory has had. I mean, factories have always been about producing something very efficiently and continuously and leveraging the resources. So any factory tries to be up and running whenever it’s supposed to be up and running, with no unpredicted or unplanned downtime. And AI is starting to become a great tool for this. I can give you a very hands-on example from a Siemens factory that does printed circuit boards. One of the steps they have to do is milling of these circuit boards. They have a milling machine, and any milling machine, especially one like that, highly automated and robotic, has a tendency to goo up over time, to get dirty. And so one challenge is to have the right maintenance, because you don’t want the machine to fail right in the middle of a shift and create this unplanned downtime.

So one big challenge is to figure out when this machine has to be maintained, without, of course, maintaining it every day, which would be very expensive. So we are actually using an AI application on an edge device that’s sitting right in the factory to monitor that machine and make a fairly accurate prediction of when it’s time to do the maintenance and clean the machine so it doesn’t fail in the next shift. So this is just one example, and I believe there are hundreds of potential applications, maybe not totally worked out yet, in this area of really making sure that factories produce consistent high quality and that there’s no unplanned downtime of the machines. There’s also, of course, a lot of use of AI already in visual quality inspections. So there are tons and tons of applications on the factory floor.

Laurel: And this has massive implications for manufacturers, because as you mentioned, it saves money, right? So is this a tough shift, do you think, for executives to think about investing in technology in a bit of a different way to then get all of those benefits?

Stefan: Yeah. It’s like with every technology; I wouldn’t think it’s a big block. There’s a lot of interest at this point, and there are many manufacturers with initiatives in that space. So I would say it’s probably going to create significant progress in productivity, but of course, it also means investment. And it’s fairly predictable what the payback of this investment will be. As far as we can see, there’s a lot of positive energy there to make this investment and to modernize factories.

Laurel: What kind of modernizations do you need for the workforce in the factories when you are installing and applying, kind of retooling, to have AI applications in mind?

Stefan: That’s a great question, because sometimes, I would say, many users of artificial intelligence applications probably don’t even know they’re using one. You basically get a box, and it will tell you it’s recommended to maintain this machine now. The operator probably will know what to do, but not necessarily know what technology they’re working with. But that said, there will probably be some, I would say, almost emerging specialties or emerging skills for engineers: really, how to use and how to optimize these AI applications that they use on the factory floor. Because as I said, we have these applications that are up and running and working today, but getting those applications to be really useful, to be accurate enough, to this point needs a lot of expertise, and at least some iteration as well. And there are probably not too many people today who are really experienced enough with the technologies and also understand the factory environment well enough to do this.

I think this is a pretty rare skill these days, and to make this a more commonplace application, of course, we will have to create more of these experts who are really good at making AI factory-floor-ready and getting it to the right maturity.

Laurel: That seems to be an excellent opportunity, right? For people to learn new skills. This is not an example of AI taking away jobs, or the more negative connotations that come up when you talk about AI and business. In practice, if we combine all of this and talk about VinFast, the Vietnamese car manufacturer that wanted to do things quite a bit differently than traditional car manufacturing: first, they built a factory, but then they applied that kind of overarching thinking of chip to factory and then eventually to city. So coming back full circle, why is this thinking unique, especially for a car manufacturer, and what kind of opportunities and challenges do they have?

Stefan: Yeah. VinFast is an interesting example, because when they got into making vehicles, they basically started on a green field. And that is probably the biggest difference between VinFast and the vast majority of the major automakers, all of which are a hundred or more years old and of course have a lot of history, which then translates into having existing factories, or having a lot of things that were really built before the age of digitalization. So VinFast started from a greenfield, and that of course is a big challenge; it makes things very difficult. But the advantage was that they really had the opportunity to start off with a fully digitalized approach, that they were able to use software. Because they were basically constructing everything, they could really start off with a fairly complete digital twin of not only their product; they also designed the whole factory on a computer before even starting to build it. And then they built it in record time.

So that’s probably the big, unique aspect: they had this opportunity to be completely digital. And once you are at that state, once you can say that my whole design, of course the software running on the vehicle, but also my whole factory and my whole factory automation already exist in a fully digital way, and I can run through simulations and scenarios, that also means you have a great starting point to use these AI technologies to optimize your factory, or to help the workers with additional optimizations, and so on.

Laurel: Do you think it’s impossible to be one of those hundred-year-old manufacturers and slowly adopt these kinds of technologies? You probably don’t have to have a greenfield environment, it just makes everything easy or I should say easier, right?

Stefan: Yeah. The auto industry has traditionally been one of the ones that invested most in productivity and in digitalization. So all of them are on that path. Again, they don’t have, or rarely have, this unique situation where you can really start from a blank slate. But a lot of the software technology, of course, is also adapted to that scenario. Where, for example, you have an existing factory, it doesn’t help you a lot to design a factory on the computer if you already have one. So you use these technologies that allow you to go through the factory and do a 3D scan. So you know exactly what the factory looks like from the inside without having designed it in a computer, because you essentially produce that information after the fact. So that’s definitely what the established or the traditional automakers do a lot, and where they’re also basically bringing the digitalization even into the existing environment.

Laurel: We’re really discussing the implications when companies can use simulations and scenarios to apply AI. So when you can, whether or not it’s greenfield or you’re adopting it for your own factory, what happens to the business? What are the outcomes? Where are some of the opportunities that are possible when AI can be applied to the actual chip, to the car, and then eventually to the city, to a larger ecosystem?

Stefan: Yeah. When we really think about the impact to the business, I frankly think we are at the beginning of understanding and calculating what the value of faster and more accurate decisions really is, which are enabled by AI. I don’t think we have a very complete understanding at this point, but it’s fairly obvious to everybody that digitalizing the design process and the manufacturing process not only saves R&D effort and R&D money, but also helps optimize the supply chain inventories, the manufacturing costs, and the total cost of the new product. And that is really where different aspects of the business come together. And I would frankly say we start to understand the immediate effects: if I have an AI-driven quality check that will reduce my waste, I can understand that kind of business value.

But there is a whole dimension of business value of using this optimization that really translates to the whole enterprise. And I would say there’s a lot of work happening to understand these implications better. But I would say at this point, we are just at the starting point of doing this, of really understanding what can optimization of a process do for the enterprise as a whole.

Laurel: So optimization, continuous learning, continuous improvement, this makes me think of, and cars, of course, The Toyota Way, which is that seminal book that was written in 2003, which is amazing, because it’s still current today. But with lean manufacturing, is it possible for AI to continuously improve that at the chip level, at the factory level, at the city to help these businesses make better decisions?

Stefan: Yeah. The Toyota Way, again, the book published in the early 2000s, with continuous improvement—in my view, continuous improvement of course always can do a lot, but there’s a bit of recognition in the last five to 10 years, somewhere like that, that continuous improvement might have hit the wall of what’s possible. So there is a lot of thought since then about what is really the next paradigm for manufacturing, when you stop thinking about evolution and optimization and you think more about revolution. And one of the concepts that has been developed here is called Industry 4.0, which is really the thought about turning upside down the idea of how manufacturing or how the value chain can work. And really think about: what if I had factories that are completely self-organizing? Which is kind of a revolutionary step. Because today, mostly a factory is set up around a certain idea of what products it makes, where you have lines and conveyors and stuff like that, and they’re all bolted to the floor. So it’s fairly static, the original idea of a factory. And you can optimize it in an evolutionary way for a long time, but you’d never break through that threshold.

So the newest thoughts, or the other concepts that are being thought about, are: what if my factory consists of independent, moving robots, and the robots can do different tasks? They can transport material, or they can then switch over to holding a robot arm or a gripper. And depending on what product I throw at this factory, it will completely reshuffle itself and work differently when I come in with a very different product, and it will self-organize itself to do something different. So those are some of the paradigms that are being thought of today, which of course can only become a reality with heavy use of AI technologies in them. And we think they are really going to revolutionize at least what some kinds of manufacturing will do. Today we talk a lot about lot size of one, and that customers want more options and variations in a product. So the factories that are able to do this, to really produce very customized products very efficiently, have to look much different.

So in many ways, I think there’s a lot of validity to the approach of continuous improvement. But I think we right now live in a time where we think more about a revolution of the manufacturing paradigm.

Laurel: That’s amazing. The next paradigm is revolution. Stefan, thank you so much for joining us today in what has been an absolutely fantastic conversation on the Business Lab.

Stefan: Absolutely. My pleasure. Thank you.

Laurel: That was Stefan Jockusch, vice president of strategy for Siemens Digital Industry Software, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River. That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at events online and around the world. For more information about us and the show, please check out our website at technologyreview.com. The show is available wherever you get your podcasts. If you enjoyed this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.


Should Twitter censor lies tweeted by the US president? Should YouTube take down covid-19 misinformation? Should Facebook do more against hate speech? Such questions, which crop up daily in media coverage, can make it seem as if the main technologically driven risk to democracies is the curation of content by social-media companies. Yet these controversies are merely symptoms of a larger threat: the depth of privatized power over the digital world.

Every democratic country in the world faces the same challenge, but none can defuse it alone. We need a global democratic alliance to set norms, rules, and guidelines for technology companies and to agree on protocols for cross-border digital activities including election interference, cyberwar, and online trade. Citizens are better represented when a coalition of their governments—rather than a handful of corporate executives—define the terms of governance, and when checks, balances, and oversight mechanisms are in place.   

There’s a long list of ways in which technology companies govern our lives without much regulation. In areas from building critical infrastructure and defending it—or even producing offensive cyber tools—to designing artificial intelligence systems and government databases, decisions made in the interests of business set norms and standards for billions of people.

Increasingly, companies take over state roles or develop products that affect fundamental rights. For example, facial recognition systems that were never properly regulated before being developed and deployed are now so widely used as to rob people of their privacy. Similarly, companies systematically scoop up private data, often without consent—an industry norm that regulators have been slow to address.

Since technologies evolve faster than laws, discrepancies between private agency and public oversight are growing. Take, for example, “smart city” companies, which promise that local governments will be able to ease congestion by monitoring cars in real time and adjusting the timing of traffic lights. Unlike, say, a road built by a construction company, this digital infrastructure is not necessarily in the public domain. The companies that build it acquire insights and value that may not flow back to the public.

This disparity between the public and private sectors is spiraling out of control. There’s an information gap, a talent gap, and a compute gap. Together, these add up to a power and accountability gap. An entire layer of control of our daily lives thus exists without democratic legitimacy and with little oversight.

Why should we care? Because decisions that companies make about digital systems may not adhere to essential democratic principles such as freedom of choice, fair competition, nondiscrimination, justice, and accountability. Unintended consequences of technological processes, wrong decisions, or business-driven designs could create serious risks for public safety and national security. And power that is not subject to systematic checks and balances is at odds with the founding principles of most democracies.

Today, technology regulation is often characterized as a three-way contest between the state-led systems in China and Russia, the market-driven one in the United States, and a values-based vision in Europe. The reality, however, is that there are only two dominant systems of technology governance: the privatized one described above, which applies in the entire democratic world, and an authoritarian one.

To bring globe-spanning technology firms to heel, we need something new: a global alliance that puts democracy first.

The laissez-faire approach of democratic governments, and their reluctance to rein in private companies at home, also plays out on the international stage. While democratic governments have largely allowed companies to govern, authoritarian governments have taken to shaping norms through international fora. This unfortunate shift coincides with a trend of democratic decline worldwide, as large democracies like India, Turkey, and Brazil have become more authoritarian. Without deliberate and immediate efforts by democratic governments to win back agency, corporate and authoritarian governance models will erode democracy everywhere.

Does that mean democratic governments should build their own social-media platforms, data centers, and mobile phones instead? No. But they do need to urgently reclaim their role in creating rules and restrictions that uphold democracy’s core principles in the technology sphere. Up to now, these governments have slowly begun to do that with laws at the national level or, in Europe’s case, at the regional level. But to bring globe-spanning technology firms to heel, we need something new: a global alliance that puts democracy first.

Teaming up

Global institutions born in the aftermath of World War II, like the United Nations, the World Trade Organization, and the North Atlantic Treaty Organization, created a rules-based international order. But they fail to take the digital world fully into account in their mandates and agendas, even if many are finally starting to focus on digital cooperation, e-commerce, and cybersecurity. And while digital trade (which requires its own regulations, such as rules for e-commerce and criteria for the exchange of data) is of growing importance, WTO members have not agreed on global rules covering services for smart manufacturing, digital supply chains, and other digitally enabled transactions.

What we need now, therefore, is a large democratic coalition that can offer a meaningful alternative to the two existing models of technology governance, the privatized and the authoritarian. It should be a global coalition, welcoming countries that meet democratic criteria.

The Community of Democracies, a coalition of states that was created in 2000 to advance democracy but never had much impact, could be revamped and upgraded to include an ambitious mandate for the governance of technology. Alternatively, a “D7” or “D20” could be established—a coalition akin to the G7 or G20 but composed of the largest democracies in the world.

Such a group would agree on regulations and standards for technology in line with core democratic principles. Then each member country would implement them in its own way, much as EU member states do today with EU directives.

What problems would such a coalition resolve? The coalition might, for instance, adopt a shared definition of freedom of expression for social-media companies to follow. Perhaps that definition would be similar to the broadly shared European approach, where expression is free but there are clear exceptions for hate speech and incitements to violence.

Or the coalition might limit the practice of microtargeting political ads on social media: it could, for example, forbid companies from allowing advertisers to tailor and target ads on the basis of someone’s religion, ethnicity, sexual orientation, or collected personal data. At the very least, the coalition could advocate for more transparency about microtargeting to create more informed debate about which data collection practices ought to be off limits.

The democratic coalition could also adopt standards and methods of oversight for the digital operations of elections and campaigns. This might mean agreeing on security requirements for voting machines, plus anonymity standards, stress tests, and verification methods such as requiring a paper backup for every vote. And the entire coalition could agree to impose sanctions on any country or non-state actor that interferes with an election or referendum in any of the member states.

Another task the coalition might take on is developing trade rules for the digital economy. For example, members could agree never to demand that companies hand over the source code of software to state authorities, as China does. They could also agree to adopt common data protection rules for cross-border transactions. Such moves would allow a sort of digital free-trade zone to develop across like-minded nations.

China already has something similar to this in the form of eWTP, a trade platform that allows global tariff-free trade for transactions under a million dollars. But eWTP, which was started by e-commerce giant Alibaba, is run by private-sector companies based in China. The Chinese government is known to have access to data through private companies. Without a public, rules-based alternative, eWTP could become the de facto global platform for digital trade, with no democratic mandate or oversight.

Another matter this coalition could address would be the security of supply chains for devices like phones and laptops. Many countries have banned smartphones and telecom equipment from Huawei because of fears that the company’s technology may have built-in vulnerabilities or backdoors that the Chinese government could exploit. Proactively developing joint standards to protect the integrity of supply chains and products would create a level playing field between the coalition’s members and build trust in companies that agree to abide by them.

The next area that may be worthy of the coalition’s attention is cyberwar and hybrid conflict (where digital and physical aggression are combined). Over the past decade, a growing number of countries have identified hybrid conflict as a national security threat. Any nation with highly skilled cyber operations can wreak havoc on countries that fail to invest in defenses against them. Meanwhile, cyberattacks by non-state actors have shifted the balance of power between states.

Right now, though, there are no international criteria that define when a cyberattack counts as an act of war. This encourages bad actors to strike with many small blows. In addition to their immediate economic or (geo)political effect, such attacks erode trust that justice will be served.

A democratic coalition could work on closing this accountability gap and initiate an independent tribunal to investigate such attacks, perhaps similar to the Hague’s Permanent Court of Arbitration, which rules on international disputes. Leaders of the democratic alliance could then decide, on the basis of the tribunal’s rulings, whether economic and political sanctions should follow.

These are just some of the ways in which a global democratic coalition could advance rules that are sorely lacking in the digital sphere. Coalition standards could effectively become global ones if its members represent a good portion of the world’s population. The EU’s General Data Protection Regulation provides an example of how this could work. Although GDPR applies only to Europe, global technology firms must follow its rules for their European users, and this makes it harder to object as other jurisdictions adopt similar laws. Similarly, non-members of the democratic coalition could end up following many of its rules in order to enjoy the benefits.

If democratic governments do not assume more power in technology governance as authoritarian governments grow more powerful, the digital world—which is a part of our everyday lives—will not be democratic. Without a system of clear legitimacy for those who govern—without checks, balances, and mechanisms for independent oversight—it’s impossible to hold technology companies accountable. Only by building a global coalition for technology governance can democratic governments once again put democracy first.

Marietje Schaake is the international policy director at Stanford University’s Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. Between 2009 and 2019, she served as a member of the European Parliament for the Dutch liberal democratic party.
