Ice Lounge Media

Construction sites are vast jigsaws of people and parts that must be pieced together just so at just the right times. As projects get larger, mistakes and delays get more expensive. The consultancy McKinsey estimates that on-site mismanagement costs the construction industry $1.6 trillion a year. But typically you might only have five managers overseeing construction of a building with 1,500 rooms, says Roy Danon, founder and CEO of British-Israeli startup Buildots: “There’s no way a human can control that amount of detail.”

Danon thinks that AI can help. Buildots is developing an image recognition system that monitors every detail of an ongoing construction project and flags up delays or errors automatically. It is already being used by two of the biggest building firms in Europe, including UK construction giant Wates in a handful of large residential builds. Construction is essentially a kind of manufacturing, says Danon. If high-tech factories now use AI to manage their processes, why not construction sites?

AI is starting to change various aspects of construction, from design to self-driving diggers. But Buildots is the first to use AI as a kind of overall site inspector. 

The system uses a GoPro camera mounted on top of a hard hat. When managers tour a site once or twice a week, the camera on their head captures video footage of the whole project and uploads it to image recognition software, which compares the status of many thousands of objects on site—such as electrical sockets and bathroom fittings—with a digital replica of the building.  

The AI also uses the video feed to track where the camera is in the building to within a few centimeters so that it can identify the exact location of the objects in each frame. The system can track the status of around 150,000 objects several times a week, says Danon. For each object the AI can tell which of three or four states it is in, from not yet begun to fully installed.
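Conceptually, progress tracking of this kind amounts to diffing detected object states against a planned schedule. The sketch below is a hypothetical illustration only; the object IDs, state names, and flagging rule are assumptions for the example, not Buildots’ actual data model.

```python
# States ordered from least to most complete (illustrative assumption).
STATES = ["not_started", "in_progress", "installed"]

plan = {  # expected state of each object at this point in the schedule
    "socket-101": "installed",
    "socket-102": "installed",
    "sink-201": "in_progress",
}

detected = {  # states inferred from the camera footage
    "socket-101": "installed",
    "socket-102": "not_started",
    "sink-201": "in_progress",
}

def flag_delays(plan, detected):
    """Return objects whose detected state lags behind the planned state."""
    return [
        obj for obj, expected in plan.items()
        if STATES.index(detected.get(obj, "not_started")) < STATES.index(expected)
    ]
```

Run over 150,000 objects several times a week, a comparison like this is what turns raw footage into an automatic list of delays.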

Site inspections are slow and tedious, says Sophie Morris at Buildots, a civil engineer who used to work in construction before joining the company. The Buildots AI gets rid of many repetitive tasks and lets people focus on important decisions. “That’s the job people want to be doing—not having to go and check if the walls have been painted or if someone’s drilled too many holes in the ceiling,” she says.

Another plus is the way the tech works in the background. “It captures data without the need to walk the site with spreadsheets or schedules,” says Glen Roberts, operations director at Wates. He says his firm is now planning to roll out the Buildots system at other sites.

Comparing the complete status of a project with its digital plan several times a week has also made a big difference during the covid-19 pandemic. When construction sites were shut down to all but the most essential on-site workers, managers on several Buildots projects were able to keep tabs on progress remotely.

But AI won’t be replacing those essential workers anytime soon. Buildings are still built by people. “At the end of the day, this is a very labor-driven industry, and that won’t change,” says Morris.

Read more

Machine learning typically requires tons of examples. To get an AI model to recognize a horse, you need to show it thousands of images of horses. This is what makes the technology computationally expensive—and very different from human learning. A child often needs to see just a few examples of an object, or even only one, before being able to recognize it for life.

In fact, children sometimes don’t need any examples to identify something. Shown photos of a horse and a rhino, and told a unicorn is something in between, they can recognize the mythical creature in a picture book the first time they see it.

Rhinocorn, a cross between a rhino and unicorn
Hmm…ok, not quite.
MS TECH / PIXABAY

Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call “less than one”-shot, or LO-shot, learning. In other words, an AI model should be able to accurately recognize more objects than the number of examples it was trained on. That could be a big deal for a field that has grown increasingly expensive and inaccessible as the data sets used become ever larger.

How “less than one”-shot learning works

The researchers first demonstrated this idea while experimenting with the popular computer-vision data set known as MNIST. MNIST, which contains 60,000 training images of handwritten digits from 0 to 9, is often used to test out new ideas in the field.

In a previous paper, MIT researchers had introduced a technique to “distill” giant data sets into tiny ones, and as a proof of concept, they had compressed MNIST down to only 10 images. The images weren’t selected from the original data set but carefully engineered and optimized to contain an equivalent amount of information to the full set. As a result, when trained exclusively on the 10 images, an AI model could achieve nearly the same accuracy as one trained on all MNIST’s images.

Handwritten digits between 0 and 9 sampled from the MNIST dataset.
Sample images from the MNIST dataset.
WIKIMEDIA
Ten images that look nonsensical but are the distilled versions of the MNIST dataset.
The 10 images “distilled” from MNIST that can train an AI model to achieve 94% recognition accuracy on handwritten digits.
TONGZHOU WANG ET AL.

The Waterloo researchers wanted to take the distillation process further. If it’s possible to shrink 60,000 images down to 10, why not squeeze them into five? The trick, they realized, was to create images that blend multiple digits together and then feed them into an AI model with hybrid, or “soft,” labels. (Think back to a horse and rhino having partial features of a unicorn.)

“If you think about the digit 3, it kind of also looks like the digit 8 but nothing like the digit 7,” says Ilia Sucholutsky, a PhD student at Waterloo and lead author of the paper. “Soft labels try to capture these shared features. So instead of telling the machine, ‘This image is the digit 3,’ we say, ‘This image is 60% the digit 3, 30% the digit 8, and 10% the digit 0.’”
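In a training loss, a soft label simply replaces the usual one-hot target distribution. The snippet below is a minimal sketch of cross-entropy against a soft target, using the illustrative three-class distribution from the quote; it is not code from the paper.

```python
import math

def cross_entropy(soft_target, predicted):
    """Cross-entropy between a soft label and a model's predicted distribution."""
    return -sum(t * math.log(p) for t, p in zip(soft_target, predicted) if t > 0)

# A soft label saying "60% digit 3, 30% digit 8, 10% digit 0" (illustrative).
soft_target = [0.6, 0.3, 0.1]

# A prediction that spreads its mass like the soft target is penalized less
# than a confident one-hot guess at only the dominant class.
close_loss = cross_entropy(soft_target, [0.55, 0.30, 0.15])
one_hot_loss = cross_entropy(soft_target, [0.98, 0.01, 0.01])
```

The point is that the loss now rewards the model for capturing the shared features between classes, not just the single “correct” one.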

The limits of LO-shot learning

Once the researchers successfully used soft labels to achieve LO-shot learning on MNIST, they began to wonder how far this idea could actually go. Is there a limit to the number of categories you can teach an AI model to identify from a tiny number of examples?

Surprisingly, the answer seems to be no. With carefully engineered soft labels, even two examples could theoretically encode any number of categories. “With two points, you can separate a thousand classes or 10,000 classes or a million classes,” Sucholutsky says.

Apples and oranges plotted on a chart by weight and color.
Plotting apples (green and red dots) and oranges (orange dots) by weight and color.
ADAPTED FROM JASON MAYES’ “MACHINE LEARNING 101” SLIDE DECK

This is what the researchers demonstrate in their latest paper, through a purely mathematical exploration. They play out the concept with one of the simplest machine-learning algorithms, known as k-nearest neighbors (kNN), which classifies objects using a graphical approach.

To understand how kNN works, take the task of classifying fruits as an example. If you want to train a kNN model to understand the difference between apples and oranges, you must first select the features you want to use to represent each fruit. Perhaps you choose color and weight, so for each apple and orange, you feed the kNN one data point with the fruit’s color as its x-value and weight as its y-value. The kNN algorithm then plots all the data points on a 2D chart and draws a boundary line straight down the middle between the apples and the oranges. At this point the plot is split neatly into two classes, and the algorithm can now decide whether new data points represent one or the other based on which side of the line they fall on.
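The fruit example above can be sketched in a few lines of plain Python. The hue and weight values are made up for illustration, and this version classifies by majority vote among the k nearest points rather than drawing an explicit boundary line, but the effect is the same.

```python
import math

# Toy training data: (hue, weight in hundreds of grams) -> label.
# Values are illustrative, chosen so the two features have comparable scales.
train = [
    ((0.30, 1.5), "apple"),   # greenish, lighter
    ((0.05, 1.6), "apple"),   # reddish, lighter
    ((0.10, 2.0), "orange"),  # orange hue, heavier
    ((0.09, 2.1), "orange"),
]

def knn_predict(point, train, k=3):
    """Classify `point` by majority vote among its k nearest neighbors."""
    nearest = sorted(train, key=lambda item: math.dist(point, item[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)
```

A new fruit lands on one side of the implicit boundary or the other depending on which training points it sits closest to.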

To explore LO-shot learning with the kNN algorithm, the researchers created a series of tiny synthetic data sets and carefully engineered their soft labels. Then they let the kNN algorithm plot the boundary lines it was seeing and found that it successfully split the plot into more classes than there were data points. The researchers also had a high degree of control over where the boundary lines fell. With various tweaks to the soft labels, they could get the kNN algorithm to draw precise patterns in the shape of flowers.
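A minimal sketch of the idea, assuming a distance-weighted variant of kNN: two one-dimensional points with engineered soft labels carve the line into three class regions, i.e. more classes than data points. The specific points and labels are invented for illustration.

```python
# Two 1-D training points with soft labels over three classes (illustrative).
points = [0.0, 1.0]
soft_labels = [
    [0.6, 0.4, 0.0],  # point at 0.0: mostly class 0, some class 1
    [0.0, 0.4, 0.6],  # point at 1.0: mostly class 2, some class 1
]

def soft_knn_predict(x):
    """Weight each point's soft label by inverse distance, then take argmax."""
    eps = 1e-9  # avoid division by zero exactly on a training point
    scores = [0.0, 0.0, 0.0]
    for p, label in zip(points, soft_labels):
        w = 1.0 / (abs(x - p) + eps)
        for c in range(3):
            scores[c] += w * label[c]
    return scores.index(max(scores))
```

Near either training point its dominant class wins, but midway between them the shared class-1 mass outweighs both, so a third class appears that no single point is labeled with.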

Various charts showing the boundary lines being plotted out by a kNN algorithm. Each chart has more and more boundary lines, all encoded in tiny datasets.
The researchers used soft-labeled examples to train a kNN algorithm to encode increasingly complex boundary lines, splitting the chart into far more classes than data points. Each of the colored areas on the plots represents a different class, while the pie charts beside each plot show the soft-label distribution for every data point.
ILIA SUCHOLUTSKY ET AL.

Of course, these theoretical explorations have some limits. While the idea of LO-shot learning should transfer to more complex algorithms, the task of engineering the soft-labeled examples grows substantially harder. The kNN algorithm is interpretable and visual, making it possible for humans to design the labels; neural networks are complicated and impenetrable, meaning the same may not be true. Data distillation, which works for designing soft-labeled examples for neural networks, also has a major disadvantage: it requires you to start with a giant data set in order to shrink it down to something more efficient.

Sucholutsky says he’s now working on figuring out other ways to engineer these tiny synthetic data sets—whether that means designing them by hand or with another algorithm. Despite these additional research challenges, however, the paper provides the theoretical foundations for LO-shot learning. “The conclusion is depending on what kind of data sets you have, you can probably get massive efficiency gains,” he says.

This is what most interests Tongzhou Wang, an MIT PhD student who led the earlier research on data distillation. “The paper builds upon a really novel and important goal: learning powerful models from small data sets,” he says of Sucholutsky’s contribution.

Ryan Khurana, a researcher at the Montreal AI Ethics Institute, echoes this sentiment: “Most significantly, ‘less than one’-shot learning would radically reduce data requirements for getting a functioning model built.” This could make AI more accessible to companies and industries that have thus far been hampered by the field’s data requirements. It could also improve data privacy, because less information would have to be extracted from individuals to train useful models.

Sucholutsky emphasizes that the research is still early, but he is excited. Every time he begins presenting his paper to fellow researchers, their initial reaction is to say that the idea is impossible, he says. When they suddenly realize it isn’t, it opens up a whole new world.

Read more

After months of experts expecting another hack-and-leak operation in the lead-up to Election Day, a strange story appeared in the New York Post on Wednesday morning. It purported to reveal emails to Joe Biden’s son Hunter that could indicate Hunter introduced his father, then vice president, to a Ukrainian energy executive.

Despite a headline claiming it was a “smoking gun,” the story is marked by questionable sources, unverified facts, and very little actual news. The whole thing echoes a disinformation campaign that US intelligence says is being carried out by the Russian government against Joe Biden, and as disinformation expert Thomas Rid smartly pointed out, the exact tactics involved closely resemble disinformation tactics used in the past. 

Vice’s Jason Koebler has an excellent though mind-numbing review of the story if you wish to really dive into the details. One thing worth noting up front is that Senate Republicans found no evidence of wrongdoing by Biden regarding his son’s overseas work. 

Social-media martyr

But the story is no longer the story. Instead, allegations of censorship by social-media platforms have become the talking point.

Soon after the Post published, Twitter and Facebook took action over what they saw as a piece of deliberate disinformation. Facebook slowed sharing of the story, citing its standard fact-checking procedure. Twitter blocked people from sharing the link entirely, referring to its policies on hacked material and posting personal information. Their actions meant that the relatively minor splash made by the initial article was followed by a bigger wave of outrage.

But there is precedent for the Silicon Valley companies’ actions that suggests this is not political censorship, as some are claiming. In June, Twitter banned links to “BlueLeaks,” a trove of records leaked from 200 American police departments (the social network also banned the group that published the records). And Facebook has established a network of fact-checkers that can add warning labels to stories and push down content with poor ratings to make it less visible. The companies have used that tactic plenty—for instance, to limit coronavirus misinformation as the pandemic has gone on.

Twitter and Facebook have been preparing for this moment for a long time—the obvious comparison is the propaganda campaign around hacked Democratic emails in the 2016 election, which were published to distract from Donald Trump’s comments on sexually assaulting women.

But that doesn’t mean they could deal with it easily.

“Not a lot of options”

“I don’t think they made the right call and I don’t think they made the wrong call,” says Bret Schafer, a researcher on media and digital disinformation at the Alliance for Securing Democracy. “There were just not a lot of good options here for them. If they let it run wild and let their platforms serve as accelerants like in 2016 and the media breathlessly covered it without analysis, they would have been hammered. If they did what they did, we’ve seen the response and it’s turned into an issue of censorship and political bias.”

Now conservative politicians are focused much more on the social-media companies’ actions. Senator Josh Hawley just subpoenaed Twitter cofounder Jack Dorsey to appear before Congress, setting up the possibility of contentious hearings on the eve of the election.

“It was probably a bigger win for them to have Facebook and Twitter try to throttle the spread of the story,” Schafer told me. “Because it’s changed the conversation broadly to one of ‘censorship’ and ‘political bias’ as a platform as opposed to—well, really there wasn’t much in these leaks that was revelatory.”

What makes this fundamentally different from 2016 is that the primary source is an American news outlet. Previous leaks from anonymous accounts (see: Guccifer 2.0, the self-proclaimed DNC hacker and actual Russian intelligence officer) would be relatively easy to take down today. But when an American media outlet with Republican support publishes the story, it becomes a minor case of martyrdom.

And so this incident fits squarely into what we’ve seen for the rest of the 2020 presidential campaign. Domestic disinformation is real, it’s happening now at scale, and it’s a much more complicated issue to fix legally, morally, and politically than foreign influence campaigns. This problem echoes our previous reporting on Trump’s disinformation campaign on voter fraud that took advantage of domestic media.

If this was a no-win situation for the tech companies, Schafer argues it was a win-win for those pushing the story. But there’s no guarantee the playbook will work quite the same way it did in 2016. The actual impact will depend in large part on how traditional media and social media cope with the storm in the coming days and weeks, as well as how the American public reacts. And they have so much else on their plate to contend with—including a new wave in the pandemic, Supreme Court hearings, a difficult economy—that it could just become a footnote.

This is an excerpt from The Outcome, our daily email on election integrity and security. Click here to sign up for regular updates.

Read more

The news: In a 90-minute virtual US congressional hearing hosted by the House Intelligence Committee on Thursday, representatives took stock of the state of misinformation in America and sought advice from some of the leading experts in the field. What they heard were urgent, alarming warnings about the state of truth, political fragmentation, and the spread of conspiracy theories, specifically QAnon. 

“In many respects it looks like we have taken one step forward and two steps back.”

—Congressman Adam Schiff

Later that day during a televised town hall meeting, President Trump said he knew “nothing” about QAnon, before saying that he agreed with one of its central beliefs. 

Who was there: The committee, headed by Democrat Adam Schiff, heard from four disinformation experts: Joan Donovan (a regular contributor to MIT Technology Review), Nina Jankowicz, Cindy Otis, and Melanie Smith. They discussed the proliferation of malign actors and misinformation around the election campaign, noting that they were the result of largely domestic forces. Otis remarked that they “embrace and deploy tactics that sound much more like foreign influence operations than the tactics of good digital campaigning.”

Who wasn’t: No Republicans attended the hearing. In fact, Republican members of the House Intelligence Committee have been boycotting almost all meetings for months. Jankowicz urged the depoliticization of online disinformation, saying that “disinformation is a threat to democracy no matter what political party it benefits.” Several witnesses and Chairman Schiff pointed out that President Trump regularly creates, shares, and amplifies disinformation.

What happened next: At a town hall event on NBC later that evening—replacing the second presidential debate, which was cancelled amid concerns over the president’s recent covid-19 diagnosis—Trump was asked to reject QAnon’s fake theory that senior Democrats are part of a satanic operation to abuse and traffic children. At first he claimed, “I know nothing about QAnon,” before adding, “I do know they are very much against pedophilia” and arguing that what he was being told by moderator Savannah Guthrie “doesn’t necessarily make it fact.”

More from the experts: The congressional witnesses testified that online disinformation is now more widespread than ever and getting more sophisticated, more nuanced, and harder to monitor. They emphasized newer trends like coordinated messages across groups and platforms, information laundering through trusted local sources, and “hidden virality,” in which disinformation is amplified in closed, unauditable spaces that make it harder to spot and remove.

What next: A host of solutions were mentioned, though largely in passing, including rewriting Section 230, the law that protects internet platforms from responsibility for the content produced by users, as well as eliminating tax breaks for social-media companies and creating accountability mechanisms for their sites. Witnesses urged redesigns of recommendation algorithms and more user-friendly reporting features for people running into online disinformation. 

Lessons unlearned: In his closing statement, Schiff said, “In many respects it looks like we have taken one step forward and two steps back when we look at where we are now compared to where we were four years ago.” In fact, it was less than a day before Trump was courting the conspiracy crowd once again.

Read more

Want to create a loyal tribe? Wondering how to build relationships with live video? To explore how to create engaging relationships while you’re live on social media, I interview Janine Cummings on the Social Media Marketing Podcast. Janine is a live video expert who helps women entrepreneurs grow their business with live video. Her course […]

The post Live Video Engagement: How to Build Relationships Live appeared first on Social Media Examiner | Social Media Marketing.

Read more