Intel continues to snap up startups to build out its machine learning and AI operations. In the latest move, TechCrunch has learned that the chip giant has acquired Cnvrg.io, an Israeli company that has built and operates a platform data scientists use to build and run machine learning models: training and tracking multiple models, running comparisons between them, building recommendations and more.

Intel confirmed the acquisition to us with a short note. “We can confirm that we have acquired Cnvrg,” a spokesperson said. “Cnvrg will be an independent Intel company and will continue to serve its existing and future customers.” Those customers include Lightricks, ST Unitas and Playtika.

Intel is not disclosing any financial terms of the deal, nor who from the startup will join Intel. Cnvrg, co-founded by Yochay Ettun (CEO) and Leah Forkosh Kolben, had raised $8 million from investors that include Hanaco Venture Capital and Jerusalem Venture Partners, and PitchBook estimates that it was valued at around $17 million in its last round. 

It was only a week ago that Intel made another acquisition to boost its AI business, also in the area of machine learning modeling: it picked up SigOpt, which had developed an optimization platform to run machine learning modeling and simulations.

While SigOpt is based out of the Bay Area, Cnvrg is in Israel, where it joins the extensive footprint Intel has built in the country, specifically in artificial intelligence research and development, anchored by its Mobileye autonomous vehicle business (acquired for more than $15 billion in 2017) and AI chipmaker Habana (acquired for $2 billion at the end of 2019).

Cnvrg.io’s platform works across on-premise, cloud and hybrid environments and it comes in paid and free tiers (we covered the launch of the free service, branded Core, last year). It competes with the likes of Databricks, SageMaker and Dataiku, as well as smaller operations like H2O.ai that are built on open-source frameworks. Cnvrg’s premise is that it provides a user-friendly platform for data scientists so they can concentrate on devising algorithms and measuring how they work, not building or maintaining the platform they run on.
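To make that concrete, here is a minimal, generic sketch of the kind of multi-model training-and-comparison workflow a platform like Cnvrg.io automates and tracks. It uses scikit-learn purely for illustration; it is not Cnvrg’s actual API, and the dataset, models and metric are arbitrary assumptions.

# Generic sketch (not Cnvrg.io's API): train several candidate models,
# record a metric for each and compare the runs. This is the loop an
# MLOps platform would log and visualize for you.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

results = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    results[name] = accuracy_score(y_test, model.predict(X_test))

# A hosted platform would log each run's parameters and metrics so this
# comparison happens in a dashboard rather than a print loop.
for name, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name}: accuracy = {acc:.3f}")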

While Intel is not saying much about the deal, it seems that some of the same logic behind last week’s SigOpt acquisition applies here as well: Intel has been refocusing its business around next-generation chips to better compete against the likes of Nvidia and smaller players like Graphcore. So it makes sense to also provide and invest in AI tools for customers, specifically services to help with the compute loads they will be running on those chips.

Notably, in our article about the Core free tier last year, Frederic pointed out that those using the platform in the cloud can do so with Nvidia-optimized containers that run on a Kubernetes cluster. It’s not clear if that will continue to be the case, or if containers will be optimized instead for Intel architecture, or both. Cnvrg’s other partners include Red Hat and NetApp.

Intel’s focus on the next generation of computing aims to offset declines in its legacy operations. In the last quarter, Intel reported a 3% decline in its revenues, led by a drop in its data center business. It said that it’s projecting the AI silicon market to be bigger than $25 billion by 2024, with AI silicon in the data center to be greater than $10 billion in that period.

In 2019, Intel reported some $3.8 billion in AI-driven revenue, but it hopes that tools like SigOpt’s will help drive more activity in that business, dovetailing with the push for more AI applications in a wider range of businesses.

Read more

It’s no secret that the Trump administration has pursued a variety of avenues to keep foreigners out of the U.S., including through a recent overhaul of the H-1B visa program for high-skilled foreign workers that will require employers to pay H-1B workers higher wages and narrow the types of degrees that would qualify an applicant — a move which has already triggered numerous lawsuits.

Still, it may surprise some to learn just how dramatically the number of U.S. visas issued to students around the world has fallen this year. According to a new report in Nikkei Asia, citing U.S. State Department data, just 808 F-1 student visas were granted to applicants in mainland China between April and September’s end, 99% fewer than the 90,410 F-1 student visas granted during the same period last year.

The story is much the same for students from other countries: 88% fewer F-1 visas were granted to students in India, 87% fewer to students in Japan, 75% fewer to students in South Korea and 60% fewer to students from Mexico.

What’s going on? A confluence of factors, seemingly.

Coronavirus is most certainly among them, as families grow more hesitant to send their children to the U.S., which reported 93,581 new cases on Sunday alone, compared with 24 in China, 38,000 in India, 468 in Japan, 97 in South Korea and 3,762 in Mexico.

So is racism. In a recent survey by Washington State University researchers, many Asians and Asian-Americans said that Donald Trump’s rhetoric around the coronavirus has sharpened the racism they’ve faced throughout their lives, with terms like “kung flu” and “China virus” common in responses; the researchers say that increasing reports of racial discrimination since the start of the COVID-19 pandemic coincide with an increase in reported negative health symptoms. (The Nikkei notes that students already studying in the U.S. have been targets, too, citing a 23-year-old Chinese woman who was yelled at to leave the U.S.)

Yet an aggressive focus on Chinese espionage in Washington has played a bigger role, suggests the outlet, which speculates that the difficulty in obtaining American visas is likely to drive some Chinese students to other countries, including Canada.

Secretary of State Mike Pompeo, for example, said in remarks at the Richard Nixon Presidential Library in July that, “We opened our arms to Chinese citizens, only to see the Chinese Communist Party exploit our free and open society. China sent propagandists into our press conferences, our research centers, our high schools, our colleges and even into our PTA meetings.”

A backlash against Chinese students in particular is not new for the Trump administration, though it has accelerated greatly in recent months. In 2018, the State Department began limiting visas for Chinese graduate students studying in certain research fields to one year, after which they need to reapply. The move rolled back a policy established during the Obama administration that allowed Chinese citizens to secure five-year student visas.

Read more

Even as the country is in the final days of a polarizing election, the cogs of VC never stop turning. On this ever-so-quiet, non-election-news Tuesday, venture firms still managed to file paperwork with the SEC indicating newly raised funds. Precursor Ventures and Insight Partners will join Hustle Fund in closing new capital.

The filings are noteworthy because they signal new capital coming into the startup world, which could look dramatically different in the coming weeks. That said, Precursor Ventures and Hustle Fund are both still fundraising, so expect them to (hopefully) add more capital in the coming months.

Precursor Ventures, led by Charles Hudson, has raised a new tranche of capital to invest in pre-seed companies. The firm first filed paperwork in March 2020 indicating plans to raise a $40 million fund, and today’s filing shows it has closed $29 million of that goal. Recent investments from Precursor include The Juggernaut, mmhmm and TeamPay. The fund made headlines recently because it promoted Sydney Thomas, its first hire, to principal. Hudson was unable to comment due to fundraising activity.

We also saw a filing from Insight Partners, which closed a $9.5 billion fund in April for startups and growth-stage investments, indicating that it has raised money for its first-ever Opportunity Fund. The SEC filing shows that Insight Partners has raised $413 million for the opportunity fund. Insight did not return a request for comment.

Earlier today, SEC filings also showed that Hustle Fund has raised $30 million for a second fund, surpassing its previous fund of $11.5 million. Interestingly, paperwork for this new fund was first filed in May 2019 with the intention of raising $50 million, making today’s news its first close. The firm is still fundraising, but that is a long gap between filing and first close. The fund was launched in 2018 by ex-500 Startups partners Eric Bahn and Elizabeth Yin to invest, like Precursor, in pre-seed startups. Hustle Fund writes $25,000 checks into 50 startups per year.

Yin declined to comment due to ongoing fundraising activity.

While the spree of funds on Election Day was noteworthy, it was somewhat expected. Generally speaking, funds want to get their paperwork cleared and closed before a potentially chaotic event or time of unrest. We saw closes from OpenView, Canaan, True Ventures and more, while firms including First Round and Khosla filed paperwork for new funds. Time will tell if this is a final exhale of news until January 1, or if the VC world will continue pushing droves of capital, holidays be damned.

Read more

A scourge of robocalls urging Americans to “stay safe and stay home” has gotten the attention of the FBI and the New York attorney general over concerns of voter suppression.

The brief message, which doesn’t specifically mention Election Day, has prompted New York Attorney General Letitia James to launch an investigation into the matter. James announced Tuesday that her office is actively investigating allegations that voters are receiving the robocalls.

“Voting is a cornerstone of our democracy,” James said in a statement Tuesday. “Attempts to hinder voters from exercising their right to cast their ballots are disheartening, disturbing and wrong.”

James added that such calls are illegal and will not be tolerated.

The FBI told TechCrunch that the agency is aware of reports of robocalls. The agency wouldn’t say if it is investigating the robocalls; however, a senior official at the Department of Homeland Security told reporters Tuesday that the FBI was investigating calls that seek to discourage people from voting, according to the AP.

“As a reminder, the FBI encourages the American public to verify any election and voting information they may receive through their local election officials,” the FBI said in a statement sent to TechCrunch.

The announcement from James follows subpoenas issued earlier this week by the New York AG office to investigate the source of these robocalls allegedly spreading disinformation. New York voters who receive concerning disinformation, or face issues at the polls, can contact her office’s Election Protection Hotline at 1-800-771-7755.

“Every voter must be able to exercise their fundamental right to vote without being harassed, coerced, or intimidated. Our nation has a legacy of free and fair elections, and this election will be no different,” James added. “Voters should rest assured that voting is safe and secure, and they should exercise their fundamental right to vote in confidence. We, along with state leaders across the nation, are working hard to protect your right to vote, and anyone who tries to hinder that right will be held accountable to the fullest extent of the law.”

Last month, the U.S. Department of Justice announced that an interagency working group convened by Attorney General William P. Barr had released a report to Congress on efforts to stop illegal robocalls. The report described efforts by the DOJ (including two civil actions filed in January 2020 against U.S.-based Voice over Internet Protocol, or VoIP, companies), the Federal Trade Commission and the Federal Communications Commission to combat illegal robocalls. Despite those efforts, and even evidence of some declines in robocalls for a time, the presidential election and the COVID-19 pandemic have fueled a spike in calls.

Read more

Yes, there’s a high-stakes presidential election underway, but tech news doesn’t stop completely: Chinese regulators put the brakes on Ant Group’s IPO, Spotify adds standalone streaming support on Apple Watch and PayPal outlines its plans for 2021. This is your Daily Crunch for November 3, 2020.

The big story: China postpones Ant Group IPO

The Shanghai stock exchange has postponed Ant Group’s IPO a day after Chinese regulators held a closed-door meeting with Jack Ma and other company executives. The company has also halted plans for its public listing in Hong Kong.

Ant Group, a financial technology giant that spun out of Alibaba, was previously on track to raise $34.5 billion in the world’s largest IPO. It’s not exactly clear why the offering was called off, but Alibaba’s founder Ma recently gave a speech criticizing China’s financial regulation.

“We are sincerely sorry for any inconvenience brought to investors,” the company said in a statement. “We will properly handle follow-up matters following compliance regulations of the two exchanges.”

The tech giants

Spotify adds standalone streaming support to its Apple Watch app — The feature was spotted in testing back in September, and it arrives roughly two years after Spotify first debuted its dedicated Apple Watch app.

Twitter hides Trump tweet attacking Supreme Court’s decision on Pennsylvania ballots — In a preview of what to expect in the coming days, President Trump pushed the limits on Twitter’s election-specific policies Monday night.

PayPal details its digital wallet plans for 2021, including crypto, Honey integration and more — The company said it plans to roll out substantial changes to its mobile apps over the next year, including support for enhanced direct deposit, crypto and all of Honey’s shopping tools.

Startups, funding and venture capital

REEF Technology raises $700M from SoftBank and others to remake parking lots — REEF began its life as Miami-based ParkJockey, providing hardware, software and management services for parking lots.

Udacity raises $75M in debt, says its tech education business is profitable after enterprise pivot — The online learning company is now focused on winning over business customers.

Walmart reportedly ends contract with inventory robotics startup Bossa Nova — Walmart has reportedly pulled the plug on one of its highest-profile partnerships.

Advice and analysis from Extra Crunch

Four takeaways from fintech VC in Q3 2020 — The latest on insurtech, banking, wealth management and payments startups.

Gaming rules the entertainment industry, so why aren’t investors showing up? — Venture activity doesn’t seem to match the size of the games market.

How startups can shake up their first idea and still crush the market — Some thoughts on the ol’ startup pivot.

(Reminder: Extra Crunch is our membership program, which aims to democratize information about startups. You can sign up here.)

Everything else

Tech stocks rip higher on Election Day — The gains came long before any results that would indicate the election’s winner.

NBC News launches an iOS 14 widget that puts election results on your home screen — NBC News allows users to customize a series of widgets with information related to early voting stats, polls, current election results and more.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Read more

Millions of voters across the US received robocalls and texts encouraging them to stay at home on Election Day, in what experts believe were clear attempts at suppressing voter turnout in the closely contested 2020 political races.

Employing such tactics to spread disinformation and sow confusion amid elections isn’t new, and it’s not yet clear whether they were used more this year than in previous elections—or what effect they actually had on turnout.

However, there is some speculation that given the heavy scrutiny of election disinformation on social media in the wake of the 2016 presidential election, malicious actors may have leaned more on private forms of communication like calls, texts, and emails in this election cycle.

Among other incidents on Tuesday, officials in Michigan warned voters early in the day to ignore numerous robocalls to residents in Flint, which encouraged them to vote on Wednesday to avoid the long lines on Election Day. Meanwhile, around 10 million automated calls went out to voters across the country in the days leading up to the election advising them to “stay safe and stay home,” the Washington Post reported.

New York’s attorney general said her office was “actively investigating allegations that voters are receiving robocalls spreading disinformation.” A senior official with the Cybersecurity and Infrastructure Security Agency told reporters on Tuesday that the FBI is looking into robocalling incidents as well. The FBI declined to confirm this, saying in a statement: “We are aware of reports of robocalls and have no further comment. As a reminder, the FBI encourages the American public to verify any election and voting information they may receive through their local election officials.”

The use of robocalls for the purpose of political speech is broadly protected in the US, under the First Amendment’s free-speech rules. But the incidents described above may violate state or federal laws concerning election intimidation and interference. That’s particularly true if the groups that orchestrated them were acting in support of a particular campaign and targeting voters likely to fall into the other camp, says Rebecca Tushnet, a law professor at Harvard Law School.

The tricky part is tracking down the groups responsible, says Brad Reaves, an assistant professor in computer science at North Carolina State University and a member of the Wolfpack Security and Privacy Research Lab.

The source of such calls is frequently obscured as the call switches across different telecom networks with different technical protocols. But as long as the call originated in the US, the source generally can be ascertained with enough work and cooperation from the telecom companies.

In fact, late last year President Donald Trump signed into law the TRACED Act, which should make it simpler to identify the source of robocalls by creating a kind of digital fingerprint that persists across networks. Among other challenges, however, it doesn’t work on the older telecom infrastructure that plenty of carriers still have in place, and it won’t do much to clamp down on bad actors based overseas, Reaves says.
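The “digital fingerprint” refers to the caller-authentication framework (commonly known as STIR/SHAKEN) that the TRACED Act pushes carriers to adopt. As a rough, hypothetical illustration of the underlying idea only: the originating carrier attaches a verifiable attestation to the call metadata, and the terminating carrier checks that it survived the network hops intact. The real framework uses certificate-based digital signatures exchanged between carriers; the shared secret and all names below are simplifications invented for this sketch.

# Toy illustration of a signed "fingerprint" on call metadata. Not the real
# STIR/SHAKEN protocol: that uses certificate-based signatures, not HMAC.
import hashlib
import hmac
import json
import time

ORIGINATING_CARRIER_KEY = b"example-shared-secret"  # hypothetical key, sketch only

def sign_call(caller_id: str, destination: str) -> dict:
    """Originating carrier attaches an attestation to the call metadata."""
    claims = {"orig": caller_id, "dest": destination, "iat": int(time.time())}
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ORIGINATING_CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_call(signed: dict) -> bool:
    """Terminating carrier checks the fingerprint survived the network hops."""
    payload = json.dumps(signed["claims"], sort_keys=True).encode()
    expected = hmac.new(ORIGINATING_CARRIER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

call = sign_call("+15551230000", "+15559870000")
print(verify_call(call))  # True only if the metadata was not altered in transit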

For her part, Tushnet says it’s crucial to aggressively investigate such acts, and prosecute them when appropriate. While it’s already too late to change the turnout for this year’s election, it might discourage similar practices in years to come. “We know it’s pure fraud, it’s purely bad, and there is no excuse for it,” Tushnet says. The only question is “what kind of resources should we be devoting” to stopping it.

Patrick Howell O’Neill contributed to this story.

Read more

Online platforms have made bans on political advertising a core part of their plans to mitigate the spread of disinformation around the US elections. Twitter moved early, banning political ads in October 2019. Facebook stopped accepting new ads last week and will indefinitely remove all political ads, old and new, after the polls close on Tuesday (the ban also applies to Instagram). Google and YouTube, meanwhile, will remove all political ads for “at least a week” once polls close. 

Turning off the spigot of political advertising is intended to limit the risk of sophisticated propaganda campaigns that could lead to more confusion or unrest. But that doesn’t mean you won’t hear from political groups at all: because of the way that each platform’s rules work, you’ll still be hearing plenty after the polls close, and in some cases they will still be paying to reach you. Campaigns also might need to fundraise after November 3 in the instance of legal challenges, meaning messages could keep coming for months.

For all platforms, what makes something a “political ad” is cloaked in regulatory legalese, but it generally means paid content from any advertiser, including political action committees and nonprofit organizations, that mentions a campaign, a candidate, the election, or social issues.

Here are some of the routes and loopholes they’ll be using: 

Candidates themselves

Electoral candidates and campaigns will still be posting on their social media accounts. This includes personal accounts and any groups or pages related to their campaign, their party, or aligned advocacy groups. It’s likely that organizations will coordinate the sharing of those messages in an effort to get in front of audiences they previously had to pay to reach. 

If any candidate declares victory prior to official election results, Twitter and Facebook have committed to adding labels to those posts. Both companies say they will remove posts that incite violence. But there are concerns about consistent enforcement of these policies. 

Direct messages

Political texting has exploded during this election, and texts are likely to keep hitting your phone beyond Tuesday. Without social media advertising, texting is the easiest way for campaigns to mass-message people outside their supporter network. Data on mobile-phone numbers is widely accessible to both campaigns and interest groups, and the channel skirts regulations from the Federal Election Commission (FEC) around political disclosures. Text messages are also notoriously hard to fact-check: watch out for hard-to-trace texts that claim a victor. 

Emails are also a favored channel for campaign communications and will certainly continue to come in after the polls close. 

Influencers

The use of influencers for political campaigning, particularly on Instagram, has exploded in 2020, and the Biden and Bloomberg campaigns both used influencers as part of their outreach strategies. Facebook has said that Instagram influencers who are paid by a campaign or other group that would usually be subject to ad restrictions are bound by its requirements around disclosure and political advertising.

Recent research indicates that disclosure does not happen consistently. Further, volunteer networks of influencer messaging are under no restrictions so long as they only volunteer intermittently, according to the FEC. Networks of celebrities and “nano-influencers” are free to post any unpaid messages, even if the messages themselves are written, designed, and coordinated by political campaigns. 

Campaign apps

Both presidential campaigns have developed apps for their supporters that allow them to send unlimited push notifications to users. The reach of the apps is obviously limited to those who have downloaded them, including many of each candidate’s base supporters. The Trump campaign app, particularly, collects a great deal of surveillance data on its users, including location and Bluetooth tracking, which could allow it to send notifications based on geographical triggers. 

Coordinated message networks

Organic networks of friends and family members are a great way for political campaigns to garner support, since they have trust and personalization built in. Campaigns and candidates are likely to continue to communicate via those networks using things like scripts and text templates to help supporters talk to their networks in private, unregulated spaces.

For example, a friend of yours might receive a text message from the Trump campaign that includes a text template meant for sharing, or from the Biden campaign that prompts people to reach out to friends with specific messaging.

Read more

The modern AI revolution began during an obscure research contest. It was 2012, the third year of the annual ImageNet competition, which challenged teams to build computer vision systems that would recognize 1,000 objects, from animals to landscapes to people.

In the first two years, the best teams had failed to reach even 75% accuracy. But in the third, a band of three researchers—a professor and his students—suddenly blew past this ceiling. They won the competition by a staggering 10.8 percentage points. That professor was Geoffrey Hinton, and the technique they used was called deep learning.

Hinton had actually been working with deep learning since the 1980s, but its effectiveness had been limited by a lack of data and computational power. His steadfast belief in the technique ultimately paid massive dividends. By the fourth year of the ImageNet competition, nearly every team was using deep learning and achieving miraculous accuracy gains. Soon enough deep learning was being applied to tasks beyond image recognition, and within a broad range of industries as well.

Last year, for his foundational contributions to the field, Hinton was awarded the Turing Award, alongside other AI pioneers Yann LeCun and Yoshua Bengio. On October 20, I spoke with him at MIT Technology Review’s annual EmTech MIT conference about the state of the field and where he thinks it should be headed next.

The following has been edited and condensed for clarity.

You think deep learning will be enough to replicate all of human intelligence. What makes you so sure?

I do believe deep learning is going to be able to do everything, but I do think there’s going to have to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced transformers, which derive really good vectors representing word meanings. It was a conceptual breakthrough. It’s now used in almost all the very best natural-language processing. We’re going to need a bunch more breakthroughs like that.
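(For readers curious what such word vectors look like in practice, the minimal sketch below uses the open-source Hugging Face transformers library to pull contextual token vectors out of a pretrained BERT encoder. The library, the model choice and the example sentence are illustrative assumptions on our part, not something Hinton refers to specifically.)

# Illustrative only: contextual word vectors from a pretrained transformer.
# Assumes the `transformers` and `torch` packages are installed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per token; each vector encodes the word's meaning in context.
vectors = outputs.last_hidden_state.squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, vector in zip(tokens, vectors):
    print(f"{token:>10s}  dim={vector.shape[0]}  first values={vector[:3].tolist()}")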

And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?

Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reason. But we also need a massive increase in scale. The human brain has about 100 trillion parameters, or synapses. What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it’s still tiny compared to the brain.

When you say scale, do you mean bigger neural networks, more data, or both?

Both. There’s a sort of discrepancy between what happens in computer science and what happens with people. People have a huge amount of parameters compared with the amount of data they’re getting. Neural nets are surprisingly good at dealing with a rather small amount of data, with a huge number of parameters, but people are even better.

A lot of the people in the field believe that common sense is the next big capability to tackle. Do you agree?

I agree that that’s one of the very important things. I also think motor control is very important, and deep neural nets are now getting good at that. In particular, some recent work at Google has shown that you can do fine motor control and combine that with language, so that you can open a drawer and take out a block, and the system can tell you in natural language what it’s doing.

For things like GPT-3, which generates this wonderful text, it’s clear it must understand a lot to generate that text, but it’s not quite clear how much it understands. But if something opens the drawer and takes out a block and says, “I just opened a drawer and took out a block,” it’s hard to say it doesn’t understand what it’s doing.

The AI field has always looked to the human brain as its biggest source of inspiration, and different approaches to AI have stemmed from different theories in cognitive science. Do you believe the brain actually builds representations of the external world to understand it, or is that just a useful way of thinking about it?

A long time ago in cognitive science, there was a debate between two schools of thought. One was led by Stephen Kosslyn, and he believed that when you manipulate visual images in your mind, what you have is an array of pixels and you’re moving them around. The other school of thought was more in line with conventional AI. It said, “No, no, that’s nonsense. It’s hierarchical, structural descriptions. You have a symbolic structure in your mind, and that’s what you’re manipulating.”

I think they were both making the same mistake. Kosslyn thought we manipulated pixels because external images are made of pixels, and that’s a representation we understand. The symbol people thought we manipulated symbols because we also represent things in symbols, and that’s a representation we understand. I think that’s equally wrong. What’s inside the brain is these big vectors of neural activity.

There are some people who still believe that symbolic representation is one of the approaches for AI.

Absolutely. I have good friends like Hector Levesque, who really believes in the symbolic approach and has done great work in that. I disagree with him, but the symbolic approach is a perfectly reasonable thing to try. But my guess is in the end, we’ll realize that symbols just exist out there in the external world, and we do internal operations on big vectors.

What do you believe to be your most contrarian view on the future of AI?

Well, my problem is I have these contrarian views and then five years later, they’re mainstream. Most of my contrarian views from the 1980s are now kind of broadly accepted. It’s quite hard now to find people who disagree with them. So yeah, I’ve been sort of undermined in my contrarian views.

Read more