Ice Lounge Media

USDC stablecoin receives approval for use in Japan, says Circle

Circle said it will officially launch its stablecoin in Japan on March 26 after one of its local partners received regulatory approval to list the US dollar stablecoin three weeks ago.

USDC (USDC) will first be listed on the “SBI VC Trade” crypto exchange under a joint venture between its parent firm — Japanese financial conglomerate SBI Holdings — and Circle’s Japanese entity Circle Japan KK, Circle said in a March 24 statement.

The news comes three weeks after SBI VC Trade secured an industry-first regulatory approval on March 4 to list USDC under the Japan Financial Services Agency’s stablecoin regulatory framework.

Circle is also looking to list USDC on Binance Japan, bitbank and bitFlyer in the near future.

Japan’s bitbank and bitFlyer are two of the country’s largest crypto exchanges, having each processed more than $25 million in trading volume over the past day and drawn over 1.85 million visits to their websites in the last month.

The regulatory approval comes after two years of back-and-forth negotiations with regulators, banking partners and industry players, Circle’s Jeremy Allaire said in a March 24 X post.

“[This] unlocks tremendous opportunities not just in trading digital assets, but more broadly in payments, cross border finance and commerce, FX,” he added.

Source: Jeremy Allaire

SBI Holdings CEO and president Yoshitaka Kitao said the USDC launch would enhance financial accessibility and drive crypto innovation in Japan’s evolving digital economy.

“[This aligns] with our broader vision for the future of payments and blockchain-based finance in Japan.”

Related: Gold-backed stablecoins will outcompete USD stablecoins — Max Keiser

Meanwhile, USDC and Circle’s euro-backed EURC (EURC) stablecoin were recognized as the first stablecoins under the Dubai Financial Services Authority’s new regime on Feb. 24.

The recognition allows companies operating in the Dubai International Financial Centre — a free economic zone — to integrate the two stablecoins into a range of digital asset applications, including payments, treasury management and services.

USDC remains the second largest stablecoin by market cap at $59.7 billion, trailing only Tether’s USDT at $143.8 billion, CoinGecko data shows.

Magazine: SEC’s U-turn on crypto leaves key questions unanswered

Read more

Trump Media looks to partner with crypto.com to launch ETFs

Trump Media has signed a non-binding agreement with Crypto.com to launch a series of exchange-traded funds in the US.

Trump Media and Technology Group Corp. (TMTG) — the operator of the social media platform Truth Social and fintech brand Truth.Fi — is also part of the agreement, which is subject to regulatory approval, according to a March 24 statement from Trump Media.

The parties plan to launch the ETFs later this year through Crypto.com’s broker-dealer, Foris Capital US LLC. The ETFs will consist of digital assets and securities with a “Made in America” focus.

Crypto.com will provide the infrastructure and custody services to supply the cryptocurrencies for the ETFs, which may include a basket of tokens, including Bitcoin (BTC), Ether (ETH), Solana (SOL), XRP (XRP) and Cronos (CRO).

The parties involved expect the ETFs to be widely available internationally, including in the US, Europe and Asia across existing brokerage platforms.

“Once launched, these ETFs will be available on the Crypto.com App for our more than 140 million users around the world,” Crypto.com co-founder and CEO Kris Marszalek said.

The ETFs are anticipated to launch alongside a slate of Truth.Fi Separately Managed Accounts (SMAs), which TMTG also plans to invest in with its cash reserves.

Source: Kris Marszalek

Related: Who’s running in Trump’s race to make US a ‘Bitcoin superpower?’

The potential ETF launch would mark yet another crypto-related endeavor involving US President Donald Trump.

However, Democratic lawmakers say that conflicts of interest have already arisen between Trump’s presidential duties and the Trump Organization’s ownership of the crypto platform, World Liberty Financial, in addition to the Official Trump (TRUMP) memecoin that launched three days before he was inaugurated.

House Representative Gerald Connolly recently referred to the TRUMP token as a “money grab” that has allowed Trump-linked entities to cash in on over $100 million worth of trading fees. 

Democrat Maxine Waters also criticized Trump’s memecoin on Jan. 20, referring to it as a rug pull that represented the “worst of crypto.”

Magazine: Trump’s crypto ventures raise conflict of interest, insider trading questions

Read more
OpenAI released updates Monday for Advanced Voice Mode, its AI voice feature that enables real-time conversations in ChatGPT, to make the AI assistant more personable and interrupt users less frequently. Manuka Stratta, an OpenAI post-training researcher, announced the changes in a video posted Monday to the company’s official social media channels. OpenAI’s latest update aims […]
Read more
The Arc Prize Foundation, a nonprofit co-founded by prominent AI researcher François Chollet, announced in a blog post on Monday that it has created a new, challenging test to measure the general intelligence of leading AI models. So far, the new test, called ARC-AGI-2, has stumped most models. “Reasoning” AI models like OpenAI’s o1-pro and […]
Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Why handing over total control to AI agents would be a huge mistake

—Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli, who all work for Hugging Face, an open-source AI company.

AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems can navigate multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost?

The promise is compelling. Who doesn’t want assistance with cumbersome work or tasks there’s no time for? But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. In fact, our research suggests that agent development could be on the cusp of a very serious misstep. Read the full story.

OpenAI has released its first research into how using ChatGPT affects people’s emotional wellbeing

OpenAI says over 400 million people use ChatGPT every week. But how does interacting with it affect us? Does it make us more or less lonely?

These are some of the questions OpenAI set out to investigate, in partnership with the MIT Media Lab, in a pair of new studies. They found that while only a small subset of users engage emotionally with ChatGPT, there are some intriguing differences between how men and women respond to using the chatbot. They also found that participants who trusted and “bonded” with ChatGPT more were likelier than others to be lonely, and to rely on it more.

Chatbots powered by large language models are still a nascent technology, and difficult to study. That’s why this kind of research is an important first step toward greater insight into ChatGPT’s impact on us, which could help AI platforms enable safer and healthier interactions. Read the full story.

—Rhiannon Williams

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Genetic testing firm 23andMe has filed for bankruptcy protection
Following months of uncertainty over its future. (CNN)
+ Tens of millions of people’s genetic data could soon belong to a new owner. (WSJ $)
+ How to… delete your 23andMe data. (MIT Technology Review)

2 Europe wants to lessen its reliance on US cloud giants
But that’s easier said than done. (Wired $)

3 Anduril is considering opening a drone factory in the UK
Europe is poised to invest heavily in defense—and Anduril wants in. (Bloomberg $)
+ The company recently signed a major drone contract with the UK government. (Insider $)
+ We saw a demo of the new AI system powering Anduril’s vision for war. (MIT Technology Review)

4 Bird flu has been detected in a sheep in the UK
It’s the first known instance of the virus infecting a sheep. (FT $)
+ But the UK is yet to report any transmission to humans. (Reuters)
+ How the US is preparing for a potential bird flu pandemic. (MIT Technology Review)

5 A tiny town in the Alps has emerged as an ALS hotspot
Suggesting that its causes may be more environmental than genetic. (The Atlantic $)
+ Motor neuron diseases took their voices. AI is bringing them back. (MIT Technology Review)

6 Firefly Aerospace’s Blue Ghost lunar lander has completed its mission
And captured some pretty incredible footage along the way. (NYT $)
+ Europe is finally getting serious about commercial rockets. (MIT Technology Review)

7 How the US could save billions of dollars in wasted energy 🪟
Ultra tough, multi-pane windows could be the answer. (WSJ $)

8 We need new ways to measure pain
Researchers are searching for objective biological indicators to get rid of the guesswork. (WP $)
+ Brain waves can tell us how much pain someone is in. (MIT Technology Review)

9 What falling in love with an AI could look like
It’s unclear whether loving machines could be training grounds for future relationships, or the future of relationships themselves. (New Yorker $)
+ The AI relationship revolution is already here. (MIT Technology Review)

10 Could you walk in a straight line for hundreds of miles?
YouTube’s favorite new challenge isn’t so much arduous as it is inconvenient. (The Guardian)

Quote of the day

“Blockbuster has collapsed. It’s time for Netflix to rise.” 

—Kian Sadeghi pitches the company they founded, DNA testing firm Nucleus Genomics, as a replacement for 23andMe in a post on X.

The big story

This town’s mining battle reveals the contentious path to a cleaner future

January 2024

In June last year, Talon, an exploratory mining company, submitted a proposal to Minnesota state regulators to begin digging up as much as 725,000 metric tons of raw ore per year, mainly to unlock the rich and lucrative reserves of high-grade nickel in the bedrock.

Talon is striving to distance itself from the mining industry’s dirty past, portraying its plan as a clean, friendly model of modern mineral extraction. It proclaims the site will help to power a greener future for the US by producing the nickel needed to manufacture batteries for electric cars and trucks, but with low emissions and light environmental impacts.

But as the company has quickly discovered, a lot of locals aren’t eager for major mining operations near their towns. Read the full story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Who are fandoms for, and who gets to escape into them?
+ A long-lost Klimt painting of Prince William Nii Nortey Dowuona has gone on display in the Netherlands.
+ Feeling down? These feel-good movies will pick you right up.
+ Why Gen Z are dedicated followers of Old Money fashion.

Read more

AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems operate outside of a chat window, navigating multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents are developed to become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost? 

New frameworks and functionalities for AI agents are announced almost weekly, and companies promote the technology as a way to make our lives easier by completing tasks we can’t do or don’t want to do. Prominent examples include “computer use,” a function that enables Anthropic’s Claude system to act directly on your computer screen, and the “general AI agent” Manus, which can use online tools for a variety of tasks, like scouting out customers or planning trips.

These developments mark a major advance in artificial intelligence: systems designed to operate in the digital world without direct human oversight.

The promise is compelling. Who doesn’t want assistance with cumbersome work or tasks there’s no time for? Agent assistance could soon take many different forms, such as reminding you to ask a colleague about their kid’s basketball tournament or finding images for your next presentation. Within a few weeks, they’ll probably be able to make presentations for you. 

There’s also clear potential for deeply meaningful differences in people’s lives. For people with hand mobility issues or low vision, agents could complete tasks online in response to simple language commands. Agents could also coordinate simultaneous assistance across large groups of people in critical situations, such as by routing traffic to help drivers flee an area en masse as quickly as possible when disaster strikes. 

But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. Our research team at Hugging Face has spent years implementing and investigating these systems, and our recent findings suggest that agent development could be on the cusp of a very serious misstep. 

Giving up control, bit by bit

A core issue lies at the heart of what’s most exciting about AI agents: The more autonomous an AI system is, the more we cede human control. AI agents are developed to be flexible, capable of completing a diverse array of tasks that don’t have to be directly programmed. 

For many systems, this flexibility is made possible because they’re built on large language models, which are unpredictable and prone to significant (and sometimes comical) errors. When an LLM generates text in a chat interface, any errors stay confined to that conversation. But when a system can act independently and with access to multiple applications, it may perform actions we didn’t intend, such as manipulating files, impersonating users, or making unauthorized transactions. The very feature being sold—reduced human oversight—is the primary vulnerability.

To understand the overall risk-benefit landscape, it’s useful to characterize AI agent systems on a spectrum of autonomy. The lowest level consists of simple processors that have no impact on program flow, like chatbots that greet you on a company website. The highest level, fully autonomous agents, can write and execute new code without human constraints or oversight—they can take action (moving around files, changing records, communicating in email, etc.) without your asking for anything. Intermediate levels include routers, which decide which human-provided steps to take; tool callers, which run human-written functions using agent-suggested tools; and multistep agents that determine which functions to do when and how. Each represents an incremental removal of human control.
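The spectrum above can be made concrete in code. This is a minimal sketch of our own devising — the level names follow the description here, but the gating policy is purely illustrative, not a standard taxonomy or any framework’s actual API:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Levels of agent autonomy, ordered by how much human control is ceded."""
    PROCESSOR = 0         # no impact on program flow (e.g., a greeting chatbot)
    ROUTER = 1            # decides which human-provided steps to take
    TOOL_CALLER = 2       # runs human-written functions with agent-suggested tools
    MULTISTEP = 3         # decides which functions to run, when, and how
    FULLY_AUTONOMOUS = 4  # writes and executes new code without oversight

def requires_human_approval(level: AutonomyLevel) -> bool:
    """Hypothetical oversight policy: anything beyond tool-calling
    needs an explicit human sign-off before acting."""
    return level > AutonomyLevel.TOOL_CALLER
```

Under a policy like this, a router that merely picks among human-written steps runs freely, while a multistep or fully autonomous agent is forced to pause for confirmation — the policy threshold itself is the design decision being debated here.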

It’s clear that AI agents can be extraordinarily helpful for what we do every day. But this brings clear privacy, safety, and security concerns. Agents that help bring you up to speed on someone would require that individual’s personal information and extensive surveillance over your previous interactions, which could result in serious privacy breaches. Agents that create directions from building plans could be used by malicious actors to gain access to unauthorized areas. 

And when systems can control multiple information sources simultaneously, potential for harm explodes. For example, an agent with access to both private communications and public platforms could share personal information on social media. That information might not be true, but it would fly under the radar of traditional fact-checking mechanisms and could be amplified with further sharing to create serious reputational damage. We imagine that “It wasn’t me—it was my agent!!” will soon be a common refrain to excuse bad outcomes.

Keep the human in the loop

Historical precedent demonstrates why maintaining human oversight is critical. In 1980, computer systems falsely indicated that over 2,000 Soviet missiles were heading toward North America. This error triggered emergency procedures that brought us perilously close to catastrophe. What averted disaster was human cross-verification between different warning systems. Had decision-making been fully delegated to autonomous systems prioritizing speed over certainty, the outcome might have been catastrophic.

Some will counter that the benefits are worth the risks, but we’d argue that realizing those benefits doesn’t require surrendering complete human control. Instead, the development of AI agents must occur alongside the development of guaranteed human oversight in a way that limits the scope of what AI agents can do.

Open-source agent systems are one way to address risks, since these systems allow for greater human oversight of what systems can and cannot do. At Hugging Face we’re developing smolagents, a framework that provides sandboxed secure environments and allows developers to build agents with transparency at their core so that any independent group can verify whether there is appropriate human control. 

This approach stands in stark contrast to the prevailing trend toward increasingly complex, opaque AI systems that obscure their decision-making processes behind layers of proprietary technology, making it impossible to guarantee safety.

As we navigate the development of increasingly sophisticated AI agents, we must recognize that the most important feature of any technology isn’t increasing efficiency but fostering human well-being. 

This means creating systems that remain tools rather than decision-makers, assistants rather than replacements. Human judgment, with all its imperfections, remains the essential component in ensuring that these systems serve rather than subvert our interests.

Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli all work for Hugging Face, a global startup in responsible open-source AI.

Dr. Margaret Mitchell is a machine learning researcher and Chief Ethics Scientist at Hugging Face, connecting human values to technology development.

Dr. Sasha Luccioni is Climate Lead at Hugging Face, where she spearheads research, consulting and capacity-building to elevate the sustainability of AI systems. 

Dr. Avijit Ghosh is an Applied Policy Researcher at Hugging Face working at the intersection of responsible AI and policy. His research and engagement with policymakers has helped shape AI regulation and industry practices.

Dr. Giada Pistilli is a philosophy researcher working as Principal Ethicist at Hugging Face.

Read more