Ice Lounge Media

Brightpick, a maker of autonomous mobile robots, on Tuesday announced a lofty addition to its current line. The appropriately named Giraffe system is notable for its large, retractable platform capable of reaching up to 20 feet (6 meters) in height to pick items from warehouse shelves. It’s a novel approach to warehouses with ceilings well […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read more

On a clear spring evening in Michigan, the stars aligned — just not in the way Upfront Ventures partner Nick Kim expected. He’d just led a $9.5 million seed round for OurSky, a software platform for space observational data, and was eager to see what its telescope partner PlaneWave Instruments could do.  But when they […]

Read more

The meteoric rise of DeepSeek—the Chinese AI startup now challenging global giants—has stunned observers and put the spotlight on China’s AI sector. Since ChatGPT’s debut in 2022, the country’s tech ecosystem has been in relentless pursuit of homegrown alternatives, giving rise to a wave of startups and billion-dollar bets. 

Today, the race is dominated by tech titans like Alibaba and ByteDance, alongside well-funded rivals backed by heavyweight investors. But two years into China’s generative AI boom, we are seeing a shift: Smaller innovators have to carve out their own niches or risk missing out. What began as a sprint has become a high-stakes marathon—China’s AI ambitions have never been higher.

An elite group of companies known as the “Six Tigers”—Stepfun, Zhipu, Minimax, Moonshot, 01.AI, and Baichuan—are generally considered to be at the forefront of China’s AI sector. But alongside them, research-focused firms like DeepSeek and ModelBest continue to grow in influence. Some, such as Minimax and Moonshot, are giving up on costly foundational model training to home in on building consumer-facing applications on top of others’ models. Others, like Stepfun and Infinigence AI, are doubling down on research, driven in part by US semiconductor restrictions.

We have identified these four Chinese AI companies as the ones to watch.

Stepfun

Founded in April 2023 by former Microsoft senior vice president Jiang Daxin, Stepfun arrived relatively late on the AI startup scene, but it has quickly become a contender thanks to its portfolio of foundational models. It is also committed to building artificial general intelligence (AGI), a mission many Chinese startups have abandoned.

With backing from investors like Tencent and funding from Shanghai’s government, the firm released 11 foundational AI models last year—spanning language, visual, video, audio, and multimodal systems. Its biggest language model so far, Step-2, has over 1 trillion parameters (GPT-4 has about 1.8 trillion). It is currently ranked behind only ChatGPT, DeepSeek, Claude, and Gemini’s models on LiveBench, a third-party benchmark site that evaluates the capabilities of large language models.

Stepfun’s multimodal model, Step-1V, is also highly ranked for its ability to understand visual inputs on Chatbot Arena, a crowdsourced platform where users can compare and rank AI models’ performance.

The company is now working with AI application developers who are building on top of its models. According to Chinese media outlet 36Kr, demand from external developers to use Stepfun’s multimodal API surged over 45-fold in the second half of 2024.

ModelBest

Researchers at the prestigious Tsinghua University founded ModelBest in 2022 in Beijing’s Haidian district. Since then, the company has distinguished itself by leaning into efficiency and embracing the trend of small language models. Its MiniCPM series—often dubbed “Little Powerhouses” in Chinese—is engineered for on-device, real-time processing on smartphones, PCs, automotive systems, smart home devices, and even robots. Its pitch to customers is that this combination of smaller models and local data processing cuts costs and enhances privacy. 

ModelBest’s newest model, MiniCPM 3.0, has only 4 billion parameters but matches the performance of GPT-3.5 on various benchmarks. On GitHub and Hugging Face, the company’s models can be found under the profile of OpenBMB (Open Lab for Big Model Base), its open-source research lab. 

Investors have taken note: In December 2024, the company announced a new, third round of funding worth tens of millions of dollars. 

Zhipu

Also originating at Tsinghua University, Zhipu AI has grown into a company with strong ties to government and academia. The firm is developing foundational models as well as AI products based on them, including ChatGLM, a conversational model, and a video generator called Ying, which is akin to OpenAI’s Sora system. 

GLM-4-Plus, the company’s most advanced large language model to date, is trained on high-quality synthetic data, which reduces training costs, but has still matched the performance of GPT-4. The company has also developed GLM-4V-Plus, a vision model capable of interpreting web pages and videos, which represents a step toward AI with more “agentic” capabilities.

Among the cohort of new Chinese AI startups, Zhipu is the first to get on the US government’s radar. On January 15, the Biden administration revised its export control regulations, adding over 20 Chinese entities—including 10 subsidiaries of Zhipu AI—to its restricted trade list, barring them from receiving US goods or technology on national-interest grounds. The US claims Zhipu’s technology is helping China’s military, which the company denies.

Valued at over $2 billion, Zhipu is currently one of the biggest AI startups in China and is reportedly soon planning an IPO. The company’s investors include Beijing city government-affiliated funds and various prestigious VCs.

Infinigence AI

Founded in 2023, Infinigence AI is smaller than other companies on this list, though it has still attracted $140 million in funding so far. The company focuses on infrastructure instead of model development. Its main selling point is its ability to combine chips from many different brands to execute AI tasks, forming what’s dubbed a “heterogeneous computing cluster.” This is a unique challenge Chinese AI companies face due to US chip sanctions.

Infinigence AI claims its system could increase the effectiveness of AI training by streamlining how different chip architectures—including various models from AMD, Huawei, and Nvidia—work in concert.

In addition, Infinigence AI has launched its Infini-AI cloud platform, which combines multiple vendors’ products to develop and deploy models. The company says it wants to build an effective compute utilization solution “with Chinese characteristics,” one native to AI training. It claims that its training system, HetHub, could reduce AI model training time by 30% by optimizing the heterogeneous computing clusters Chinese companies often have.

Honorable mentions

Baichuan

While many of its competitors chase scale and expansive application ranges, Baichuan AI, founded by industry veteran Wang Xiaochuan (the founder of Sogou) in April 2023, is focused on the domestic Chinese market, targeting sectors like medical assistance and health care. 

With a valuation over $2 billion after its newest round of fundraising, Baichuan is currently among the biggest AI startups in China.

Minimax

Founded by AI veteran Yan Junjie, Minimax is best known for its product Talkie, a companion chatbot available around the world. The platform provides various characters users can chat with for emotional support or entertainment, and it had even more downloads last year than leading competitor chatbot platform Character.ai.

Chinese media outlet 36Kr reported that Minimax’s revenue in 2024 was around $70 million, making it one of the most successful consumer-facing Chinese AI startups in the global market. 

Moonshot

Moonshot is best known for building Kimi, the second-most-popular AI chatbot in China, just after ByteDance’s Doubao, with over 13 million users. Released in 2023, Kimi supports input lengths of over 200,000 characters, making it a popular choice among students, white-collar workers, and others who routinely have to work with long chunks of text.

Founded by Yang Zhilin, a renowned AI researcher who studied at Tsinghua University and Carnegie Mellon University, Moonshot is backed by big tech companies, including Alibaba, and top venture capital firms. The company is valued at around $3 billion but is reportedly scaling back on its foundational model research as well as overseas product development plans, as key people leave the company.

Read more

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

How the Rubin Observatory will help us understand dark matter and dark energy

We can put a good figure on how much we know about the universe: 5%. That’s how much of what’s floating about in the cosmos is ordinary matter—planets and stars and galaxies and the dust and gas between them. The other 95% is dark matter and dark energy, two mysterious entities aptly named for our inability to shed light on their true nature.

Previous work has begun pulling apart these dueling forces, but dark matter and dark energy remain shrouded in a blanket of questions—critically, what exactly are they?

Enter the Vera C. Rubin Observatory, one of our 10 breakthrough technologies for 2025. Boasting the largest digital camera ever created, Rubin is expected to study the cosmos in the highest resolution yet once it begins observations later this year. And with a better window on the cosmic battle between dark matter and dark energy, Rubin might narrow down existing theories on what they are made of. Here’s a look at how.

—Jenna Ahart

This story is part of MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

Anthropic has a new way to protect large language models against jailbreaks

What’s new? AI firm Anthropic has developed a new line of defense against a common kind of attack called a jailbreak. A jailbreak tricks large language models (LLMs) into doing something they have been trained not to, such as help somebody create a weapon. And Anthropic’s new approach could be the strongest shield against the attacks yet.

How they did it: Jailbreaks are a kind of adversarial attack: input passed to a model that makes it produce an unexpected output. Despite a decade of research, there is still no way to build a model that isn’t vulnerable. But instead of trying to fix its models, Anthropic has developed a barrier that stops attempted jailbreaks from getting through and unwanted responses from the model getting out. Read the full story.

—Will Douglas Heaven

Three things to know as the dust settles from DeepSeek

The launch of a single new AI model does not normally cause much of a stir outside tech circles, nor does it typically spook investors enough to wipe out $1 trillion in the stock market. Now, a couple of weeks since DeepSeek’s big moment, the dust has settled a bit.

Within AI, though, what impact is DeepSeek likely to have in the longer term? Here are three seeds DeepSeek has planted that will grow even as the initial hype fades.

—James O’Donnell

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

If you’re interested in learning more about what DeepSeek’s breakout success means for the future of AI, watch this conversation between our news editor Charlotte Jee, senior AI editor Will Douglas Heaven, and China reporter Caiwei Chen. It was held at noon ET yesterday as part of our subscriber-only Roundtables series—check it out!

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk’s government allies are weighing up using AI to cut costs  
As part of Musk’s plans to gut federal contracts across the board. (NYT $)
+ A 25-year-old engineer now has access to the US’s top secret systems. (Wired $)
+ Staffers for the US agency that sends aid to the world’s neediest have been locked out of their email accounts. (NY Mag $)
+ Such measures would have been unthinkable just a few short years ago. (Vox)
+ Palantir CEO Alex Karp is a fan of ‘DOGE’. (Insider $)

2 China has announced its own tariffs on US imports
Sparking new fears of a full-blown trade war. (FT $)
+ The days of cheap Chinese shopping in the US could be coming to an end. (NY Mag $)
+ Here’s what Trump’s tariffs mean for the likes of Temu and Shein. (The Information $)

3 US senators blame Silicon Valley for DeepSeek’s runaway success
Big Tech’s lobbying for softer export controls created corporate loopholes, they claim. (WP $)
+ The rise of DeepSeek doesn’t mean the controls have failed, according to ASML. (WSJ $)
+ How a top Chinese AI model overcame US sanctions. (MIT Technology Review)

4 Meta says it won’t release AI systems it deems too risky
But how that risk is measured is up to Meta. (TechCrunch)
+ A new public database lists all the ways AI could go wrong. (MIT Technology Review)

5 Gender affirming care is under major threat in the US
Advocates fear Trump’s executive order will prevent many people from accessing lifesaving treatments. (Undark)
+ Many hospitals are continuing to offer their services, though. (Axios)
+ New York’s Attorney General says pausing such care could violate state law. (The Hill)

6 The App Store is now hosting its first porn app
And Apple is not happy about it. (Reuters)
+ The company has an EU antitrust law to thank. (WP $)

7 The Doomsday Clock has been given a makeover 🕔
We are now 89 seconds away from the end of the world. (Fast Company $)

8 Meet the UK’s AI grandmother wasting scammers’ time
Fraudsters have been left frustrated by the bot’s dithering. (The Guardian)
+ The people using humour to troll their spam texts. (MIT Technology Review)

9 We still don’t know much about Mars’ moons
But a new mission could change that. (New Scientist $)

10 Mark Zuckerberg’s famous hoodie is up for auction
If you’re inclined to own a piece of nerd history. (Insider $)

Quote of the day

“It’ll scare people, it’ll make people think that the industry is a scam.”

—Anthony Scaramucci, Donald Trump’s former communications director, doesn’t think much of his former boss’s memecoin, he tells the Financial Times.

The big story

The open-source AI boom is built on Big Tech’s handouts. How long will it last?

May 2023

In May 2023 a leaked memo reported to have been written by Luke Sernau, a senior engineer at Google, said out loud what many in Silicon Valley must have been whispering for weeks: an open-source free-for-all is threatening Big Tech’s grip on AI.

New open-source large language models—alternatives to Google’s Bard or OpenAI’s ChatGPT that researchers and app developers can study, build on, and modify—are dropping like candy from a piñata. These are smaller, cheaper versions of the best-in-class AI models created by the big firms that (almost) match them in performance—and they’re shared for free.

In many ways, that’s a good thing. AI won’t thrive if just a few mega-rich companies get to gatekeep this technology or decide how it is used. But this open-source boom is precarious, and if Big Tech decides to shut up shop, a boomtown could become a backwater. Read the full story.

—Will Douglas Heaven

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Today would have been the 112th birthday of Rosa Parks, the civil rights activist who changed the course of history.
+ If you’re planning a spring break, consider this well-timed inspiration.
+ A Buffy the Vampire Slayer reboot is reportedly in the works.
+ Rise up, daughters of grunge!

Read more

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

The launch of a single new AI model does not normally cause much of a stir outside tech circles, nor does it typically spook investors enough to wipe out $1 trillion in the stock market. Now, a couple of weeks since DeepSeek’s big moment, the dust has settled a bit. The news cycle has moved on to calmer things, like the dismantling of long-standing US federal programs, the purging of research and data sets to comply with recent executive orders, and the possible fallouts from President Trump’s new tariffs on Canada, Mexico, and China.

Within AI, though, what impact is DeepSeek likely to have in the longer term? Here are three seeds DeepSeek has planted that will grow even as the initial hype fades.

First, it’s forcing a debate about how much energy AI models should be allowed to use up in pursuit of better answers. 

You may have heard (including from me) that DeepSeek is energy efficient. That’s true for its training phase, but for inference, which is when you actually ask the model something and it produces an answer, it’s complicated. It uses a chain-of-thought technique, which breaks down complex questions—like whether it’s ever okay to lie to protect someone’s feelings—into chunks, and then logically answers each one. The method allows models like DeepSeek to do better at math, logic, coding, and more.
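To see what that decomposition looks like in the abstract, here is a toy sketch. It is not DeepSeek’s implementation—the `chain_of_thought` helper and the example question are invented for illustration—but it shows the core idea: a multi-step problem is answered as an explicit sequence of intermediate steps rather than in one jump.

```python
# Toy illustration of chain-of-thought decomposition (invented for
# illustration, not DeepSeek's actual inference code): a question is
# split into ordered steps, and each step's result feeds the next.

def chain_of_thought(steps):
    """Solve a problem as an ordered list of (description, fn) steps,
    threading each intermediate result into the next step."""
    trace, value = [], None
    for description, fn in steps:
        value = fn(value)
        trace.append(f"{description} -> {value}")
    return value, trace

# "A notebook costs $3 and a pen costs $2. What do 4 notebooks
#  and 5 pens cost together?"
answer, trace = chain_of_thought([
    ("cost of 4 notebooks at $3 each", lambda _: 4 * 3),
    ("add cost of 5 pens at $2 each",  lambda prev: prev + 5 * 2),
])
print(answer)        # 22
for line in trace:   # each intermediate "thought" is spelled out
    print(line)
```

Each intermediate "thought" costs extra computation, which is exactly why this style of inference draws more power than a single-shot answer.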

The problem, at least to some, is that this way of “thinking” uses up a lot more electricity than the AI we’ve been used to. Though AI is responsible for a small slice of total global emissions right now, there is increasing political support to radically increase the amount of energy going toward AI. Whether or not the energy intensity of chain-of-thought models is worth it, of course, depends on what we’re using the AI for. Scientific research to cure the world’s worst diseases seems worthy. Generating AI slop? Less so. 

Some experts worry that the impressiveness of DeepSeek will lead companies to incorporate it into lots of apps and devices, and that users will ping it for scenarios that don’t call for it. (Asking DeepSeek to explain Einstein’s theory of relativity is a waste, for example, since it doesn’t require logical reasoning steps, and any typical AI chat model can do it with less time and energy.) Read more from me here.

Second, DeepSeek made some creative advancements in how it trains, and other companies are likely to follow its lead. 

Advanced AI models don’t just learn from lots of text, images, and video. They rely heavily on humans to clean that data, annotate it, and help the AI pick better responses, often for paltry wages.

One way human workers are involved is through a technique called reinforcement learning with human feedback. The model generates an answer, human evaluators score that answer, and those scores are used to improve the model. OpenAI pioneered this technique, though it’s now used widely by the industry. 
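The loop described above can be sketched in a few lines. This is a deliberately minimal caricature—the `rlhf_round` function, the toy preference weights, and the candidate answers are invented for illustration, standing in for the gradient updates a real reward model would drive:

```python
# Minimal sketch of the reinforcement-learning-from-human-feedback loop:
# the model proposes answers, human evaluators score them, and the scores
# shift which answers the model prefers next time. (Names and the toy
# preference table are invented for illustration.)

def rlhf_round(candidates, human_scores, preferences, lr=0.5):
    """One round: nudge each candidate's preference weight toward its score."""
    for answer, score in zip(candidates, human_scores):
        preferences[answer] = preferences.get(answer, 0.0) + lr * score
    return preferences

prefs = {}
# Evaluators rate three candidate answers (+1 helpful, -1 harmful).
prefs = rlhf_round(
    candidates=["polite refusal", "helpful answer", "harmful answer"],
    human_scores=[0.2, 1.0, -1.0],
    preferences=prefs,
)
best = max(prefs, key=prefs.get)
print(best)  # the highest-scored answer is now preferred
```

The expensive part is the `human_scores` line—that is the labor DeepSeek found ways to automate for domains where answers can be checked mechanically.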

As my colleague Will Douglas Heaven reports, DeepSeek did something different: It figured out a way to automate this process of scoring and reinforcement learning. “Skipping or cutting down on human feedback—that’s a big thing,” Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel, told him. “You’re almost completely training models without humans needing to do the labor.” 

It works particularly well for subjects like math and coding, but not so well for others, so workers are still relied upon. Still, DeepSeek then went one step further and used techniques reminiscent of how Google DeepMind trained its AI model back in 2016 to excel at the game Go, essentially having it map out possible moves and evaluate their outcomes. These steps forward, especially since they are outlined broadly in DeepSeek’s open-source documentation, are sure to be followed by other companies. Read more from Will Douglas Heaven here.

Third, its success will fuel a key debate: Can you push for AI research to be open for all to see and push for US competitiveness against China at the same time?

Long before DeepSeek released its model for free, certain AI companies were arguing that the industry needs to be an open book. If researchers subscribed to certain open-source principles and showed their work, they argued, the global race to develop superintelligent AI could be treated like a scientific effort for public good, and the power of any one actor would be checked by other participants.

It’s a nice idea. Meta has largely spoken in support of that vision, and venture capitalist Marc Andreessen has said that open-source approaches can be more effective at keeping AI safe than government regulation. OpenAI has been on the opposite side of that argument, keeping its models closed off on the grounds that it can help keep them out of the hands of bad actors. 

DeepSeek has made those narratives a bit messier. “We have been on the wrong side of history here and need to figure out a different open-source strategy,” OpenAI’s Sam Altman said in a Reddit AMA on Friday, which is surprising given OpenAI’s past stance. Others, including President Trump, doubled down on the need to make the US more competitive on AI, seeing DeepSeek’s success as a wake-up call. Dario Amodei, a founder of Anthropic, said it’s a reminder that the US needs to tightly control which types of advanced chips make their way to China in the coming years, and some lawmakers are pushing the same point. 

The coming months, and future launches from DeepSeek and others, will stress-test every single one of these arguments. 


Now read the rest of The Algorithm

Deeper Learning

OpenAI launches a research tool

On Sunday, OpenAI launched a tool called Deep Research. You can give it a complex question to look into, and it will spend up to 30 minutes reading sources, compiling information, and writing a report for you. It’s brand new, and we haven’t tested the quality of its outputs yet. Since its computations take so much time (and therefore energy), right now it’s only available to users on OpenAI’s paid Pro tier ($200 per month), and it limits the number of queries they can make per month.

Why it matters: AI companies have been competing to build useful “agents” that can do things on your behalf. On January 23, OpenAI launched an agent called Operator that could use your computer for you to do things like book restaurants or check out flight options. The new research tool signals that OpenAI is not just trying to make these mundane online tasks slightly easier; it wants to position AI as able to handle professional research tasks. It claims that Deep Research “accomplishes in tens of minutes what would take a human many hours.” Time will tell if users will find it worth the high costs and the risk of including wrong information. Read more from Rhiannon Williams.

Bits and Bytes

Déjà vu: Elon Musk takes his Twitter takeover tactics to Washington

Federal agencies have offered exits to millions of employees and tested the prowess of engineers—just like when Elon Musk bought Twitter. The similarities have been uncanny. (The New York Times)

AI’s use in art and movies gets a boost from the Copyright Office

The US Copyright Office finds that art produced with the help of AI should be eligible for copyright protection under existing law in most cases, but wholly AI-generated works probably are not. What will that mean? (The Washington Post)

OpenAI releases its new o3-mini reasoning model for free

OpenAI just released a reasoning model that’s faster, cheaper, and more accurate than its predecessor. (MIT Technology Review)

Anthropic has a new way to protect large language models against jailbreaks

This line of defense could be the strongest yet. But no shield is perfect. (MIT Technology Review)

Read more

MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

We can put a good figure on how much we know about the universe: 5%. That’s how much of what’s floating about in the cosmos is ordinary matter—planets and stars and galaxies and the dust and gas between them. The other 95% is dark matter and dark energy, two mysterious entities aptly named for our inability to shed light on their true nature. 

Cosmologists have cast dark matter as the hidden glue binding galaxies together. Dark energy plays an opposite role, ripping the fabric of space apart. Neither emits, absorbs, or reflects light, rendering them effectively invisible. So rather than directly observing either of them, astronomers must carefully trace the imprint they leave behind. 

Previous work has begun pulling apart these dueling forces, but dark matter and dark energy remain shrouded in a blanket of questions—critically, what exactly are they?

Enter the Vera C. Rubin Observatory, one of our 10 breakthrough technologies for 2025. Boasting the largest digital camera ever created, Rubin is expected to study the cosmos in the highest resolution yet once it begins observations later this year. And with a better window on the cosmic battle between dark matter and dark energy, Rubin might narrow down existing theories on what they are made of. Here’s a look at how.

Untangling dark matter’s web

In the 1930s, the Swiss astronomer Fritz Zwicky proposed the existence of an unseen form of matter he named dunkle Materie—in English, dark matter—after studying a group of galaxies called the Coma Cluster. Zwicky found that the galaxies were traveling too quickly to be contained by their joint gravity and decided there must be a missing, unobservable mass holding the cluster together.

Zwicky’s theory was initially met with much skepticism. But in the 1970s an American astronomer, Vera Rubin, obtained evidence that significantly strengthened the idea. Rubin studied the rotation rates of 60 individual galaxies and found that if a galaxy had only the mass we’re able to observe, that wouldn’t be enough to contain its structure; its spinning motion would send it ripping apart and sailing into space. 

Rubin’s results helped sell the idea of dark matter to the scientific community, since an unseen force seemed to be the only explanation for these spiraling galaxies’ breakneck spin speeds. “It wasn’t necessarily a smoking-gun discovery,” says Marc Kamionkowski, a theoretical physicist at Johns Hopkins University. “But she saw a need for dark matter. And other people began seeing it too.”

Evidence for dark matter only grew stronger in the ensuing decades. But sorting out what might be behind its effects proved tricky. Various subatomic particles were proposed. Some scientists posited that the phenomena supposedly generated by dark matter could also be explained by modifications to our theory of gravity. But so far the hunt, which has employed telescopes, particle colliders, and underground detectors, has failed to identify the culprit. 

The Rubin observatory’s main tool for investigating dark matter will be gravitational lensing, an observational technique that’s been used since the late ’70s. As light from distant galaxies travels to Earth, intervening dark matter distorts its image—like a cosmic magnifying glass. By measuring how the light is bent, astronomers can reverse-engineer a map of dark matter’s distribution. 
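The simplest version of that light-bending is the point-mass deflection formula from general relativity, theta = 4GM / (c² b), for light passing a mass M at impact parameter b. A minimal sketch follows; note this textbook formula is only the starting point for lensing, not the weak-lensing statistics Rubin’s pipeline will actually compute:

```python
import math

# Point-mass gravitational lensing: light passing a mass M at impact
# parameter b is deflected by theta = 4*G*M / (c^2 * b). A textbook
# simplification, not Rubin's actual weak-lensing analysis.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458.0      # speed of light, m/s

def deflection_arcsec(mass_kg, impact_parameter_m):
    """Deflection angle of a light ray, in arcseconds."""
    theta_rad = 4 * G * mass_kg / (C**2 * impact_parameter_m)
    return math.degrees(theta_rad) * 3600

# Classic sanity check: light grazing the Sun is bent by about
# 1.75 arcseconds, the value measured during the 1919 eclipse.
sun_mass = 1.989e30    # kg
sun_radius = 6.957e8   # m
print(f"{deflection_arcsec(sun_mass, sun_radius):.2f} arcsec")
```

In practice astronomers measure deflections statistically, from the correlated stretching of many background galaxy shapes, and invert them to map the intervening dark matter.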

Other observatories, like the Hubble Space Telescope and the James Webb Space Telescope, have already begun stitching together this map from their images of galaxies. But Rubin plans to do so with exceptional precision and scale, analyzing the shapes of billions of galaxies rather than the hundreds of millions that current telescopes observe, according to Andrés Alejandro Plazas Malagón, Rubin operations scientist at SLAC National Laboratory. “We’re going to have the widest galaxy survey so far,” Plazas Malagón says.

Capturing the cosmos in such high definition requires Rubin’s 3.2-billion-pixel LSST Camera. The camera boasts the largest focal plane ever built for astronomy, granting it access to large patches of the sky.

The telescope is also designed to reorient its gaze every 34 seconds, meaning astronomers will be able to scan the entire sky every three nights. The LSST will revisit each galaxy about 800 times throughout its tenure, says Steven Ritz, a Rubin project scientist at the University of California, Santa Cruz. The repeat exposures will let Rubin team members more precisely measure how the galaxies are distorted, refining their map of dark matter’s web. “We’re going to see these galaxies deeply and frequently,” Ritz says. “That’s the power of Rubin: the sheer grasp of being able to see the universe in detail and on repeat.”

The ultimate goal is to overlay this map on different models of dark matter and examine the results. The leading idea, the cold dark matter model, suggests that dark matter moves slowly compared to the speed of light and interacts with ordinary matter only through gravity. Other models suggest different behavior. Each comes with its own picture of how dark matter should clump in halos surrounding galaxies. By plotting its chart of dark matter against what those models predict, Rubin might exclude some theories and favor others. 

A cosmic tug of war

If dark matter lies on one side of a magnet, pulling matter together, then you’ll flip it over to find dark energy, pushing it apart. “You can think of it as a cosmic tug of war,” Plazas Malagón says.

Dark energy was discovered in the late 1990s, when astronomers found that the universe was not only expanding, but doing so at an accelerating rate, with galaxies moving away from one another at higher and higher speeds. 

“The expectation was that the relative velocity between any two galaxies should have been decreasing,” Kamionkowski says. “This cosmological expansion requires something that acts like antigravity.” Astronomers quickly decided there must be another unseen factor inflating the fabric of space and pegged it as dark matter’s cosmic foil. 

So far, dark energy has been observed primarily through Type Ia supernovas, a special breed of explosion that occurs when a white dwarf star accumulates too much mass. Because these supernovas all tend to have the same peak in luminosity, astronomers can gauge how far away they are by measuring how bright they appear from Earth. Paired with a measure of how fast they are moving, this data clues astronomers in on the universe’s expansion rate. 
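The standard-candle logic above can be made concrete with the distance-modulus relation m - M = 5 log10(d / 10 pc): because Type Ia supernovae share roughly the same peak absolute magnitude M (about -19.3 is the commonly quoted value), the apparent magnitude m we measure fixes the distance d. A small sketch, with illustrative numbers rather than a real measurement:

```python
# Type Ia supernovae as "standard candles": they share roughly the same
# peak absolute magnitude M (~ -19.3), so the apparent magnitude m we
# measure fixes the distance d via the distance modulus
#   m - M = 5 * log10(d / 10 pc).
# Numbers here are illustrative, not a real observation.

def luminosity_distance_pc(apparent_mag, absolute_mag=-19.3):
    """Distance in parsecs implied by apparent vs. absolute magnitude."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A Type Ia supernova observed at apparent magnitude 19.3:
d_pc = luminosity_distance_pc(19.3)
print(f"{d_pc / 1e6:.0f} Mpc")
```

Pairing distances like this with each supernova’s redshift is what revealed that expansion is accelerating: distant supernovae turned out to be dimmer, hence farther away, than a decelerating universe would allow.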

Rubin will continue studying dark energy with high-resolution glimpses of Type Ia supernovas. But it also plans to retell dark energy’s cosmic history through gravitational lensing. Because light doesn’t travel instantaneously, when we peer into distant galaxies, we’re really looking at relics from millions to billions of years ago—however long it takes for their light to make the lengthy trek to Earth. Astronomers can effectively use Rubin as a makeshift time machine to see how dark energy has carved out the shape of the universe. 

“These are the types of questions that we want to ask: Is dark energy a constant? If not, is it evolving with time? How is it changing the distribution of dark matter in the universe?” Plazas Malagón says.

If dark energy was weaker in the past, astronomers expect to see galaxies grouped even more densely into galaxy clusters. “It’s like urban sprawl—these huge conglomerates of matter,” Ritz says. Meanwhile, if dark energy was stronger, it would have pushed galaxies away from one another, creating a more “rural” landscape. 

Researchers will be able to use Rubin’s maps of dark matter and the 3D distribution of galaxies to plot out how the structure of the universe changed over time, unveiling the role of dark energy and, they hope, helping scientists evaluate the different theories to account for its behavior. 

Of course, Rubin has a lengthier list of goals to check off. Some top items entail tracing the structure of the Milky Way, cataloguing cosmic explosions, and observing asteroids and comets. But since the observatory was first conceptualized in the early ’90s, its core goal has been to explore this hidden branch of the universe. After all, before a 2019 act of Congress dedicated the observatory to Vera Rubin, it was simply called the Dark Matter Telescope. 

Rubin isn’t alone in the hunt, though. In 2023, the European Space Agency launched the Euclid telescope into space to study how dark matter and dark energy have shaped the structure of the cosmos. And NASA’s Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, has similar plans to measure the universe’s expansion rate and chart large-scale distributions of dark matter. Both also aim to tackle that looming question: What makes up this invisible empire?

Rubin will test its systems throughout most of 2025 and plans to begin the LSST survey late this year or in early 2026. Twelve to 14 months later, the team expects to reveal its first data set. Then we might finally begin to know exactly how Rubin will light up the dark universe. 

Read more