For much of last year, about 2,500 US service members from the 15th Marine Expeditionary Unit sailed aboard three ships throughout the Pacific, conducting training exercises in the waters off South Korea, the Philippines, India, and Indonesia. At the same time, onboard the ships, an experiment was unfolding: The Marines in the unit responsible for sorting through foreign intelligence and making their superiors aware of possible local threats were for the first time using generative AI to do it, testing a leading AI tool the Pentagon has been funding.

Two officers tell us that they used the new system to help scour thousands of pieces of open-source intelligence—nonclassified articles, reports, images, videos—collected in the various countries where they operated, and that it did so far faster than was possible with the old method of analyzing them manually. Captain Kristin Enzenauer, for instance, says she used large language models to translate and summarize foreign news sources, while Captain Will Lowdon used AI to help write the daily and weekly intelligence reports he provided to his commanders. 
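Neither officer details the exact tooling, but the workflow they describe maps onto a routine LLM call. Here is a minimal sketch of what translate-and-summarize automation can look like, assuming access to an OpenAI-style chat API; the model name, prompt, and function are illustrative assumptions, not Vannevar Labs' actual system:

```python
# Minimal sketch of LLM-assisted translation and summarization of a
# foreign-language news article. The model name and prompts are
# illustrative assumptions, not Vannevar Labs' actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_and_summarize(article_text: str, source_language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; any capable LLM would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You translate foreign news articles into English and "
                    "summarize them in three bullet points for an "
                    "intelligence briefing. Flag any mention of military "
                    "exercises or local threats."
                ),
            },
            {
                "role": "user",
                "content": f"Source language: {source_language}\n\n{article_text}",
            },
        ],
    )
    return response.choices[0].message.content
```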

“We still need to validate the sources,” says Lowdon. But the unit’s commanders encouraged the use of large language models, he says, “because they provide a lot more efficiency during a dynamic situation.”

The generative AI tools they used were built by the defense-tech company Vannevar Labs, which in November was granted a production contract worth up to $99 million by the Pentagon’s startup-oriented Defense Innovation Unit with the goal of bringing its intelligence tech to more military units. The company, founded in 2019 by veterans of the CIA and US intelligence community, joins the likes of Palantir, Anduril, and Scale AI as a major beneficiary of the US military’s embrace of artificial intelligence—not only for physical technologies like drones and autonomous vehicles but also for software that is revolutionizing how the Pentagon collects, manages, and interprets data for warfare and surveillance. 

Though the US military has been developing computer vision models and similar AI tools, like those used in Project Maven, since 2017, the use of generative AI—tools that can engage in human-like conversation, such as those built by Vannevar Labs—represents a newer frontier.

The company applies existing large language models, including some from OpenAI and Microsoft, as well as bespoke ones of its own, to troves of open-source intelligence it has been collecting since 2021. The scale of this collection is hard to comprehend (and is a large part of what sets Vannevar’s products apart): terabytes of data in 80 different languages are hoovered up every day across 180 countries. The company says it is able to analyze social media profiles and breach firewalls in countries like China to get hard-to-access information; it also uses nonclassified data that is difficult to get online (gathered by human operatives on the ground), as well as reports from physical sensors that covertly monitor radio waves to detect illegal shipping activities.

Vannevar then builds AI models to translate information, detect threats, and analyze political sentiment, with the results delivered through a chatbot interface that’s not unlike ChatGPT. The aim is to provide customers with critical information on topics as varied as international fentanyl supply chains and China’s efforts to secure rare earth minerals in the Philippines. 

“Our real focus as a company,” says Scott Philips, Vannevar Labs’ chief technology officer, is to “collect data, make sense of that data, and help the US make good decisions.” 

That approach is particularly appealing to the US intelligence apparatus because for years the world has been awash in more data than human analysts can possibly interpret—a problem that contributed to the 2003 founding of Palantir, a company with a market value of over $200 billion that is known for its powerful and controversial tools, including a database that helps Immigration and Customs Enforcement search for and track information on undocumented immigrants.

In 2019, Vannevar saw an opportunity to use large language models, which were then new on the scene, as a novel solution to the data conundrum. The technology could enable AI not just to collect data but to actually talk through an analysis with someone interactively.

Vannevar’s tools proved useful for the deployment in the Pacific, and Enzenauer and Lowdon say that while they were instructed to always double-check the AI’s work, they didn’t find inaccuracies to be a significant issue. Enzenauer regularly used the tool to track any foreign news reports in which the unit’s exercises were mentioned and to perform sentiment analysis, detecting the emotions and opinions expressed in text. Judging whether a foreign news article reflects a threatening or friendly opinion toward the unit is a task that on previous deployments she had to do manually.

“It was mostly by hand—researching, translating, coding, and analyzing the data,” she says. “It was definitely way more time-consuming than it was when using the AI.” 
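The sentiment-analysis step she describes, judging whether foreign coverage reads as hostile or friendly toward the unit, can be framed as a constrained classification prompt. A hedged sketch, again assuming an OpenAI-style chat API; the label set and prompt are illustrative, not the fielded tool:

```python
# Sketch of LLM-based sentiment classification of foreign news coverage.
# The label set and prompt are illustrative; this is not the deployed system.
from openai import OpenAI

client = OpenAI()

LABELS = ["hostile", "neutral", "friendly"]

def classify_sentiment(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        temperature=0,   # keep output stable for a classification task
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the article's stance toward US forces as "
                    "exactly one of: hostile, neutral, friendly. "
                    "Reply with the label only."
                ),
            },
            {"role": "user", "content": article_text},
        ],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in LABELS else "neutral"  # fall back on parse failure
```

Collapsing an article into one of a few labels is exactly the kind of judgment that critics quoted later in this story argue is too subjective to automate reliably.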

Still, Enzenauer and Lowdon say there were hiccups, some of which would affect most digital tools: The ships had spotty internet connections much of the time, limiting how quickly the AI model could synthesize foreign intelligence, especially if it involved photos or video. 

With this first test completed, the unit’s commanding officer, Colonel Sean Dynan, said on a call with reporters in February that heavier use of generative AI was coming; this experiment was “the tip of the iceberg.” 

This is indeed the direction that the entire US military is barreling toward at full speed. In December, the Pentagon said it would spend $100 million over the next two years on pilot projects specifically for generative AI applications. In addition to Vannevar, it’s also turning to Microsoft and Palantir, which are working together on AI models that would make use of classified data. (The US is of course not alone in this approach; notably, Israel has been using AI to sort through information and even generate lists of targets in its war in Gaza, a practice that has been widely criticized.)

Perhaps unsurprisingly, plenty of people outside the Pentagon are warning about the potential risks of this plan, including Heidy Khlaaf, who is chief AI scientist at the AI Now Institute, a research organization, and has expertise in leading safety audits for AI-powered systems. She says this rush to incorporate generative AI into military decision-making ignores more foundational flaws of the technology: “We’re already aware of how LLMs are highly inaccurate, especially in the context of safety-critical applications that require precision.” 

Khlaaf adds that even if humans are “double-checking” the work of AI, there’s little reason to think they’re capable of catching every mistake. “‘Human-in-the-loop’ is not always a meaningful mitigation,” she says. When an AI model relies on thousands of data points to come to conclusions, “it wouldn’t really be possible for a human to sift through that amount of information to determine if the AI output was erroneous.”

One particular use case that concerns her is sentiment analysis, which she argues is “a highly subjective metric that even humans would struggle to appropriately assess based on media alone.” 

If AI perceives hostility toward US forces where a human analyst would not—or if the system misses hostility that is really there—the military could make a misinformed decision or escalate a situation unnecessarily.

Sentiment analysis is indeed a task that AI has not perfected. Philips, the Vannevar CTO, says the company has built models specifically to judge whether an article is pro-US or not, but MIT Technology Review was not able to evaluate them. 

Chris Mouton, a senior engineer for RAND, recently tested how well-suited generative AI is for the task. He evaluated leading models, including OpenAI’s GPT-4 and an older version of GPT fine-tuned to do such intelligence work, on how accurately they flagged foreign content as propaganda compared with human experts. “It’s hard,” he says, noting that AI struggled to identify more subtle types of propaganda. But he adds that the models could still be useful in lots of other analysis tasks. 
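Mouton's comparison, scoring model flags against human expert labels, boils down to a simple agreement measurement. A simplified sketch with hypothetical labels (this is not RAND's actual methodology or data):

```python
# Simplified sketch of scoring a model's propaganda flags against human
# expert labels, in the spirit of the RAND comparison. The labels below
# are hypothetical examples, not RAND's data.

def accuracy(model_flags: list[bool], expert_flags: list[bool]) -> float:
    """Fraction of articles where the model agrees with the experts."""
    matches = sum(m == e for m, e in zip(model_flags, expert_flags))
    return matches / len(expert_flags)

# Hypothetical labels for six articles: True = flagged as propaganda.
experts = [True, True, False, False, True, False]
model   = [True, False, False, False, True, True]  # misses one, over-flags one

print(f"Agreement with experts: {accuracy(model, experts):.0%}")  # -> 67%
```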

Another limitation of Vannevar’s approach, Khlaaf says, is that the usefulness of open-source intelligence is debatable. Mouton says that open-source data can be “pretty extraordinary,” but Khlaaf points out that unlike classified intel gathered through reconnaissance or wiretaps, it is exposed to the open internet—making it far more susceptible to misinformation campaigns, bot networks, and deliberate manipulation, as the US Army has warned.

For Mouton, the biggest open question now is whether these generative AI technologies will be simply one investigatory tool among many that analysts use—or whether they’ll produce the subjective analysis that’s relied upon and trusted in decision-making. “This is the central debate,” he says. 

What everyone agrees on is that AI models are accessible—you can just ask them a question about complex pieces of intelligence, and they’ll respond in plain language. But what imperfections will be acceptable in the name of efficiency is still in dispute.

Update: This story was updated to include additional context from Heidy Khlaaf.

Read more

Bitcoin reserve bills advance in New Hampshire, Florida

New Hampshire’s House and Florida’s House Insurance and Banking Committee have respectively advanced bills allowing their states to create Bitcoin reserves.

New Hampshire’s House passed its Bitcoin reserve bill, HB302, in a 192-179 vote on April 10; the bill now heads to the Senate. The state is now the fourth to pass a Bitcoin (BTC) reserve bill through one chamber, joining Arizona, Texas and Oklahoma.

If HB302 clears New Hampshire’s Senate and Governor Kelly Ayotte signs it into law, it would allow the state’s treasurer to invest 10% of the state’s general fund and other authorized funds in precious metals and certain digital assets. The bill also sets out how those assets should be custodied.

The bill specifies that only cryptocurrencies with a market capitalization of over $500 billion would be eligible for investment, a criterion that only Bitcoin currently meets.


New Hampshire’s House votes to pass HB302, the state’s Bitcoin reserve bill. Source: New Hampshire House of Representatives

In a debate prior to the vote, Democratic Representative Terry Spahr argued that the bill is unnecessary and could undermine the future security of the state’s digital assets stockpile.

“Unbeknownst to the committee and to the sponsor […] the treasurer testified that they already have that authority,” Spahr said. He added that cryptocurrency is “constantly shifting and changing, and it’s sort of dangerous to be kind of locked into certain types of security measures, and I think that bill does this.”

Republican Representative Jordan Ulery countered that the bill was necessary as it could create the “potential for a large amount of money being earned by the state in these investments.”

New Hampshire has two other blockchain-related bills working their way through the legislature — HB310, which covers stablecoins and real-world asset (RWA) tokenization, and HB639, which deals with blockchain regulation and dispute resolution.

Florida House Committee passes Bitcoin reserve bill 

Meanwhile on April 10, Florida’s House Insurance and Banking Committee passed the state’s Bitcoin reserve bill, HB487, with a unanimous vote.

The bill must clear three more committees before it progresses to a floor vote in Florida’s House.

Similar to New Hampshire’s bill, HB487 would allow Florida’s chief financial officer and the State Board of Administration to invest up to 10% of certain state funds — including the General Revenue Fund and the Budget Stabilization Fund — into Bitcoin.

The bill’s sponsor, Republican Representative Webster Barnaby, pleaded with the committee before the vote “to vote up on this very important bill,” which he claimed would “put Florida in the leading edge of this very new technology.”

Related: US federal agencies to report crypto holdings to Treasury by April 7

Florida’s bill gives the state’s financial chief the ability to invest in digital assets directly, through certain qualified custodians, or through exchange-traded products, and it details security and custody requirements.

According to Bitcoin Laws, which tracks the progress of digital assets legislation, Arizona is currently leading the race to become the first US state to establish a strategic Bitcoin reserve. 


Source: Bitcoin Laws

On March 24, two digital assets reserve bills, SB1373 and SB1025, cleared Arizona’s House Rules Committee and are now headed to the state’s House for a full floor vote. 

If passed by the House, the bills would then need the signature of Arizona’s Democratic governor, Katie Hobbs, to become law.

Magazine: Financial nihilism in crypto is over — It’s time to dream big again

Read more

SEC staff gives guidance on how securities laws could apply to crypto

US Securities and Exchange Commission staff have given guidance on how federal securities laws could apply to crypto, saying companies issuing or dealing with tokens that could be securities should give better details about their business.

The SEC’s Division of Corporation Finance said in a staff statement on April 10 that it was giving its views “to provide greater clarity on the application of the federal securities laws to crypto assets.” 

The Division said its statement was based on its observations of disclosures made under existing disclosure requirements and “addresses our views about certain specific disclosure questions that market participants have presented to the staff.”

The guidance, which the Division noted had “no legal force or effect,” said crypto companies that are giving disclosures about their business have typically shared a host of information about their operations, such as what the company specifically does, how any issued tokens work, and how the business generates—or intends to generate—revenue.

Companies have also disclosed whether they plan to remain engaged in a crypto network or app after they launch it and, if not, whether any other entities will take over.

Crypto firms should also explain their technology, such as whether their product is a proof-of-work or proof-of-stake blockchain, along with its block size, transaction speed, reward mechanisms, measures to ensure network security, and whether the protocol is open source.

The SEC staff also noted that registration or qualification is not required in connection with crypto offerings that aren’t securities and aren’t part of an investment contract. However, the statement didn’t provide clarity on what digital assets could be securities.

Commercial litigator Joe Carlasare told Cointelegraph the statement was “a welcome and refreshing step toward clearer regulatory guidance.”

“Adhering to the guidelines will help entities not only position themselves more favorably with regulators but also demonstrate a commitment to transparency and credibility,” he said.

Crypto firms should share all risks

The SEC staff statement said that issuers usually clearly disclose risks related to price volatility, network and cybersecurity vulnerabilities, and custody risks, in addition to standard business, operational, legal and regulatory risks.

A “materially complete description” of a security is also typically required from an issuer, which includes the mechanism behind paying dividends, distributions, profit-sharing and voting rights, including how those rights are enforced.

Related: No crypto project has registered with the SEC and ‘lived to tell the tale’ — House committee hearing

It added that a company should disclose whether a protocol’s code can be modified and, if so, who can make such changes, as well as whether the smart contracts involved have been subjected to a third-party security audit.

Other disclosures the statement mentioned include whether the token’s supply is fixed, how it was or will be issued, and the identities of executives and “significant employees.”

The Division said its guidance was intended to build on the work of the SEC’s Crypto Task Force, which is planning to host a series of roundtables with the crypto industry to discuss how it should police crypto trading, custody, tokenization and decentralized finance.

Magazine: SEC’s U-turn on crypto leaves key questions unanswered

Read more

Trump signs resolution killing IRS DeFi broker rule

Update April 11, 1:46 am: This article has been updated to include more information and background on the resolution.

US President Donald Trump has signed a joint congressional resolution overturning a Biden administration-era rule that would have required decentralized finance (DeFi) protocols to report transactions to the Internal Revenue Service.

Set to take effect in 2027, the so-called IRS DeFi broker rule would have expanded the tax authority’s existing reporting requirements to include DeFi platforms, requiring them to disclose gross proceeds from crypto sales, including information regarding taxpayers involved in the transactions.

Trump formally killed the measure by signing the resolution on April 10, marking the first time a crypto bill has ever been signed into law, Representative Mike Carey, who backed the bill, said in a statement.

“The DeFi Broker Rule needlessly hindered American innovation, infringed on the privacy of everyday Americans, and was set to overwhelm the IRS with an overflow of new filings that it doesn’t have the infrastructure to handle during tax season,” he said.


Source: Mike Carey

Critics of the rule claimed it would saddle decentralized platforms with overly onerous rules, hampering innovation in crypto and DeFi.

Supporters, such as Democratic Representative Lloyd Doggett, said in March that killing the IRS rule would create a loophole that wealthy tax cheats would exploit.

The resolution to kill the rule had quickly made its way through Congress and passed out of the House Ways and Means Committee on Feb. 25, with the House passing it on March 11.

The Senate passed the resolution on March 26. It had previously passed its own version in early March, but the House drafted its own because of constitutional rules about where budget measures must originate.

Trump was widely expected to sign the bill, as White House AI and crypto czar David Sacks said in March that the president supported killing the measure.

Industry “can breathe again” with IRS rule repealed

Kristin Smith, CEO of the crypto advocacy group Blockchain Association, said in an April 10 statement that the “industry’s innovators, builders, and developers can breathe again” now that the resolution has passed.

“This rule promised an end to the United States crypto industry; it was a sledgehammer to the engine of American innovation,” she added.


Kristin Smith claimed the IRS rule would have destroyed the US crypto industry. Source: Blockchain Association

Related: OpenSea urges SEC to exclude NFT marketplaces from regulator’s remit

The lobby group filed a lawsuit in December against the IRS, the Treasury and then-Treasury Secretary Janet Yellen to repeal the IRS rule, claiming it was unlawful and an “unconstitutional overreach.”

The Trump administration has taken a friendly attitude toward crypto and has worked to bring the Securities and Exchange Commission to heel; the agency has wound back the hardline stance toward crypto it forged under former Chair Gary Gensler.

The regulator has dismissed a number of enforcement actions and probes against crypto firms that it launched under the Biden administration and has begun a series of industry consultations on how it should regulate crypto.

Magazine: Memecoin degeneracy is funding groundbreaking anti-aging research

Read more
A growing number of Tesla owners are putting their used vehicles up for sale, as consumers react to Elon Musk’s political activities and the global protests they have fueled. In March, the number of used Tesla vehicles listed for sale on Autotrader.com skyrocketed, Sherwood News reported, citing data from Autotrader parent company Cox Automotive. The […]
Read more
Albert Saniger, the founder and former CEO of Nate, an AI shopping app that promised a “universal” checkout experience, was charged with defrauding investors on Wednesday, according to a press release from the U.S. Department of Justice. Founded in 2018, Nate raised over $50 million from investors like Coatue and Forerunner Ventures, most recently raising […]
Read more

The International Energy Agency states in a new report that AI could eventually reduce greenhouse-gas emissions, possibly by much more than the boom in energy-guzzling data center development pushes them up.

The finding echoes a point that prominent figures in the AI sector have made as well to justify, at least implicitly, the gigawatts’ worth of electricity demand that new data centers are placing on regional grid systems across the world. Notably, in an essay last year, OpenAI CEO Sam Altman wrote that AI will deliver “astounding triumphs,” such as “fixing the climate,” while offering the world “nearly-limitless intelligence and abundant energy.”

There are reasonable arguments to suggest that AI tools may eventually help reduce emissions, as the IEA report underscores. But what we know for sure is that they’re driving up energy demand and emissions today—especially in the regional pockets where data centers are clustering. 

So far, these facilities, which generally run around the clock, are substantially powered through natural-gas turbines, which produce significant levels of planet-warming emissions. Electricity demands are rising so fast that developers are proposing to build new gas plants and convert retired coal plants to supply the buzzy industry.

The other thing we know is that there are better, cleaner ways of powering these facilities already, including geothermal plants, nuclear reactors, hydroelectric power, and wind or solar projects coupled with significant amounts of battery storage. The trade-off is that these facilities may cost more to build or operate, or take longer to get up and running.

There’s something familiar about the suggestion that it’s okay to build data centers that run on fossil fuels today because AI tools will help the world drive down emissions eventually. It recalls the purported promise of carbon credits: that it’s fine for a company to carry on polluting at its headquarters or plants, so long as it’s also funding, say, the planting of trees that will suck up a commensurate level of carbon dioxide.

Unfortunately, we’ve seen again and again that such programs often overstate any climate benefits, doing little to alter the balance of what’s going into or coming out of the atmosphere.  

But in the case of what we might call “AI offsets,” the potential to overstate the gains may be greater, because the promised benefits wouldn’t meaningfully accrue for years or decades. Plus, there’s no market or regulatory mechanism to hold the industry accountable if it ends up building huge data centers that drive up emissions but never delivers on these climate claims. 

The IEA report outlines instances where industries are already using AI in ways that could help drive down emissions, including detecting methane leaks in oil and gas infrastructure, making power plants and manufacturing facilities more efficient, and reducing energy consumption in buildings.

AI has also shown early promise in materials discovery, helping to speed up the development of novel battery electrolytes. Some hope the technology could deliver advances in solar materials, nuclear power, or other clean energy technologies and improve climate science, extreme weather forecasting, and disaster response, as other studies have noted. 

Even without any “breakthrough discoveries,” the IEA estimates, widespread adoption of AI applications could cut emissions by 1.4 billion tons in 2035. Those reductions, “if realized,” would be as much as triple the emissions from data centers by that time, under the IEA’s most optimistic development scenario.
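The “triple” framing implies a rough figure for the data centers themselves: if 1.4 billion tons of cuts is three times the sector’s 2035 emissions, those emissions come out to roughly 0.47 billion tons. A back-of-envelope check of that reading (the division is ours, not an IEA figure):

```python
# Back-of-envelope check: if the projected cuts (1.4 billion tons of CO2
# in 2035) are "as much as triple" data-center emissions, the implied
# data-center figure follows by simple division. Illustrative only.
projected_cuts_gt = 1.4   # IEA estimate, billion tons CO2 avoided in 2035
multiple = 3              # cuts are "as much as triple" data-center emissions
implied_data_center_gt = projected_cuts_gt / multiple
print(f"Implied data-center emissions: ~{implied_data_center_gt:.2f} billion tons")
# -> Implied data-center emissions: ~0.47 billion tons
```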

But that’s a very big “if.” It requires placing a lot of faith in technical advances, wide-scale deployments, and payoffs from changes in practices over the next 10 years. And there’s a big gap between how AI could be used and how it will be used, a difference that will depend a lot on economic and regulatory incentives.

Under the Trump administration, there’s little reason to believe that US companies, at least, will face much government pressure to use these tools specifically to drive down emissions. Absent the necessary policy carrots or sticks, it’s arguably more likely that the oil and gas industry will deploy AI to discover new fossil-fuel deposits than to pinpoint methane leaks.

To be clear, the IEA’s figures are a scenario, not a prediction. The authors readily acknowledged that there’s huge uncertainty on this issue, stating: “It is vital to note that there is currently no momentum that could ensure the widespread adoption of these AI applications. Therefore, their aggregate impact, even in 2035, could be marginal if the necessary enabling conditions are not created.”

In other words, we certainly can’t count on AI to drive down emissions more than it drives them up, especially within the time frame now demanded by the dangers of climate change. 

As a reminder, it’s already 2025. Rising emissions have now pushed the planet perilously close to fully tipping past 1.5 ˚C of warming, the risks from heatwaves, droughts, sea-level rise and wildfires are climbing—and global climate pollution is still going up. 

We are barreling toward midcentury, just 25 years shy of when climate models show that every industry in every nation needs to get pretty close to net-zero emissions to prevent warming from surging past 2 ˚C over preindustrial levels. And yet any new natural-gas plants built today, for data centers or any other purpose, could easily still be running 40 years from now.

Carbon dioxide stays in the atmosphere for hundreds of years. So even if the AI industry does eventually provide ways of cutting more emissions than it produces in a given year, those future reductions won’t cancel out the emissions the sector will pump out along the way—or the warming they produce.

It’s a trade-off we don’t need to make if AI companies, utilities, and regional regulators make wiser choices about how to power the data centers they’re building and running today.

Some tech and power companies are taking steps in this direction, by spurring the development of solar farms near their facilities, helping to get nuclear plants back online, or signing contracts to get new geothermal plants built. 

But such efforts should become more the rule than the exception. We no longer have the time or carbon budget to keep cranking up emissions on the promise that we’ll take care of them later.

Read more