
How this year’s Nobel winners changed the thinking on economic growth

By Antonio Navas, University of Sheffield 

What makes some countries rich and others poor? Is there any action a country can take to improve living standards for its citizens? Economists have wondered about this for centuries. If the answer to the second question is yes, then the impact on people’s lives could be staggering.

This year’s Sveriges Riksbank Prize in Economic Sciences (commonly known as the Nobel prize for economics) has gone to three researchers who have provided answers to these questions: Philippe Aghion, Peter Howitt and Joel Mokyr.

For most of human history, economic stagnation has been the norm – modern economic growth is very recent from a historical point of view. This year’s winners have been honoured for their contributions towards explaining how to achieve sustained economic growth.

At the beginning of the 1980s, theories around economic growth were largely dominated by the works of American economist Robert Solow. An important conclusion emerged: in the long run, per-capita income growth is determined by technological progress.
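In the standard textbook statement of this result (a general formulation in generic notation, not drawn from this article), output per worker along the balanced growth path ends up growing at the rate of technological progress:

$$Y = K^{\alpha}(AL)^{1-\alpha}, \qquad \frac{\dot{A}}{A} = g \;\;\Longrightarrow\;\; \frac{\dot{y}}{y} \to g, \quad \text{where } y = Y/L.$$

Saving and capital accumulation raise the level of income per person, but only growth in A (technological progress) sustains long-run growth.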

Solow’s framework, however, did not explain how technology accumulates over time, nor the role of institutions and policies in boosting it. As such, the theory can neither explain why countries grow differently for sustained periods nor what kind of policies could help a country improve its long-run growth performance.

It’s possible to argue that technological innovation comes from the work of scientists, who are motivated less by money than the rest of society might be. As such, there would be little that countries could do to intervene – technological innovations would be the result of the scientists’ own interests and motivations.

But that thinking changed with the emergence of endogenous growth theory, which aims to explain which forces drive innovation. This includes the works of Paul Romer, Nobel prizewinner in 2018, as well as this year’s winners Aghion and Howitt.

These three authors advocate for theories in which technological progress ultimately derives from firms trying to create new products (Romer) or improve the quality of existing products (Aghion and Howitt). For firms to try to break new ground, they need to have the right incentives.

Creative destruction

While Romer recognises the importance of intellectual property rights to reward firms financially for creating new products, the framework of Aghion and Howitt outlines the importance of something known as “creative destruction”.

This is where innovation results from a battle between firms trying to get the best-quality products to meet consumer needs. In their framework, a new innovation means the displacement of an existing one.

In their basic model, protecting intellectual property is important in order to reward firms for innovating. But at the same time, innovations do not come from leaders but from new entrants to the industry. Incumbents do not have the same incentive to innovate because it will not improve their position in the sector. Consequently, too much protection generates barriers to entry and may slow growth.

But what is less explored in their work is the idea that each innovation brings winners (consumers and innovative firms) and losers (firms and workers under the old, displaced technology). These tensions could shape a country’s destiny in terms of growth – as other works have pointed out, the owners of the old technology may try to block innovation.

This is where Mokyr complements these works perfectly by providing a historical context. Mokyr’s work focuses on the origins of the Industrial Revolution and also the history of technological progress from ancient times until today.

Mokyr noted that while scientific discoveries were behind technological progress, a scientific discovery was not a guarantee of technological advances.

It was only when the modern world started to apply the knowledge discovered by scientists to problems that would improve people’s lives that humans saw sustained growth. In Mokyr’s book The Gifts of Athena, he argues that the Enlightenment was behind the change in scientists’ motivations.

The 2025 winners Joel Mokyr, Philippe Aghion and Peter Howitt. Ill. Niklas Elmehed © Nobel Prize Outreach

In Mokyr’s works, for growth to be sustained it is vital that knowledge flows and accumulates. This was the spirit embedded in the Industrial Revolution, and it’s what fostered the creation of the institution where I work – the University of Sheffield, which enjoyed financial support from the steel industry in the 19th century.

Mokyr’s later works emphasise the key role of a culture of knowledge in order for growth to improve living standards. As such, openness to new ideas becomes crucial.

Similarly, Aghion and Howitt’s framework has become a standard tool in economics. It has been used to explore many important questions for human wellbeing: the relationship between competition and innovation, unemployment and growth, growth and income inequality, and globalisation, among many other topics.

Analysis using their framework still has an impact on our lives today. It is present in policy debates around big data, artificial intelligence and green innovation. And Mokyr’s analysis of how knowledge accumulates poses a central question around what countries can do to encourage an innovation ecosystem and improve the lives of their citizens.

But this year’s prize is also a warning about the consequences of damaging the engines of growth. Scientists collaborating with firms to advance living standards is the ultimate elixir for growth. Undermining science, globalisation and competition might not be the right recipe.

About the Author:

Antonio Navas, Senior Lecturer in Economics, University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Energy Co. to Combine With Semiconductor Co. to Create AI Infrastructure

Source: Streetwise Reports (10/10/25)

Energy innovation company Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA) says it has signed a non-binding Letter of Intent (LOI) for a proposed all-stock business combination with Smartkem Inc. (SMTK:Nasdaq). Find out the terms of the proposed merger.

Energy innovation company Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA) announced it has signed a non-binding Letter of Intent (LOI) dated October 6, 2025, with Smartkem Inc. (SMTK:Nasdaq), a company pioneering a new class of organic semiconductor technology, for a proposed all-stock business combination, according to a release.

If finalized, the Proposed Transaction would create a Nasdaq-listed, U.S.-owned and controlled artificial intelligence (AI) infrastructure company, merging low-cost domestic energy with advanced semiconductor packaging and materials to meet the rising demand for AI compute capacity.

JEV said it is strategically positioned at the crossroads of energy and AI, utilizing its robust energy framework and renewable innovations to provide reliable, cost-effective power for AI data centers.

The proposed transaction aims to integrate Smartkem’s patented organic semiconductor platform into Jericho’s infrastructure to accelerate: energy-efficient AI data centers designed for next-generation workloads, advanced AI chip packaging that minimizes power consumption and heat, low-power optical data transmission for faster interconnects, and conformable sensors for environmental monitoring and operational resilience, Jericho noted in the release.

“AI compute growth is driving unprecedented demand for U.S. power and infrastructure,” Jericho Chief Executive Officer Brian Williamson said. “By combining JEV’s scalable energy platform with Smartkem’s semiconductor breakthroughs, we can deliver a new generation of faster, efficient, and more resilient AI data centers.”

Ian Jenks, chairman and CEO of Smartkem, added, “This proposed transaction positions Smartkem’s technology at the center of the largest technology build-out of our era. We believe this combination provides the pathway for our patented materials to reach their full commercial potential inside next-generation AI infrastructure.”

“Together, JEV and Smartkem are developing a unified U.S. platform for AI data centers that pairs energy resilience with advanced semiconductors, a vertically integrated strategy aimed at driving sustainable growth and creating value for shareholders,” said Anthony Amato, strategic advisor to Smartkem.

According to Jericho, some highlights of the proposed transaction include establishing a fully integrated platform covering energy supply and AI data center infrastructure and positioning the combined company to capitalize on the forecasted growth in U.S. power demand for AI data centers.

The combination of JEV’s scalable energy and infrastructure expertise with Smartkem’s patented organic semiconductor materials and OTFT technologies will drive innovation and enhance data center efficiency, JEV said.

The transaction “ensures strategic technology assets are developed, deployed, and scaled under U.S. ownership for global AI infrastructure partners,” the release said.

It also combines two experienced management teams “focused on commercializing disruptive innovations at scale.”

Terms of the Proposed Transaction

Under the LOI, the proposed transaction is structured as an all-stock business combination, executed through either a share exchange or statutory merger, Jericho said. In this arrangement, Smartkem would be the surviving legal entity and continue as a publicly listed company on The Nasdaq Stock Market, becoming the “combined company.”

Upon closing, Jericho stockholders would own 65% of the fully diluted equity securities of the combined company, and pre-transaction Smartkem stockholders would own 35%, subject to certain adjustments.

Brian Williamson, currently the CEO of Jericho, would assume the role of CEO for the combined company, according to the release. The board of directors would be reconstituted to include a majority of members designated by Jericho, in compliance with Nasdaq and SEC requirements.

Both companies will require significant additional capital to negotiate the proposed transaction, obtain necessary stockholder approvals, and complete the transaction. Closing is contingent on several conditions, including negotiating a definitive agreement, satisfactory due diligence, board and stockholder approvals, and Nasdaq’s approval for continued listing.

Smartkem and Jericho have agreed to a 60-day exclusivity period to negotiate the terms of a definitive agreement. This period can be terminated by either party under certain conditions, including if Smartkem does not purchase at least US$500,000 worth of Jericho common shares by November 30, 2025. While the LOI is active, Smartkem will purchase Jericho common shares from treasury, subject to certain conditions.

The transaction terms outlined in the LOI are expected to be replaced by a definitive agreement. The final legal structure may be adjusted based on tax, corporate, securities, and accounting considerations.

About Smartkem

Smartkem is revolutionizing electronics with a new class of transistors developed using its proprietary semiconductor materials, Jericho said in the release. Its TRUFLEX® semiconductor polymers enable low-temperature printing processes compatible with existing manufacturing infrastructure, delivering low-cost, high-performance displays. The platform is applicable in various display technologies, including MicroLED, LCD, and AMOLED, as well as advanced computer and AI chip packaging, sensors, and logic.

Smartkem designs and develops its materials at its R&D facility in Manchester, U.K., and offers prototyping services at the Centre for Process Innovation (CPI) in Sedgefield, U.K. It also operates a field application office in Hsinchu, Taiwan, near its collaboration partner, The Industrial Technology Research Institute (ITRI).

Smartkem is developing a commercial-scale production process and Electronic Design Automation (EDA) tools to demonstrate the commercial viability of manufacturing a new generation of displays using its materials.

The company holds an extensive IP portfolio, including 140 granted patents across 17 patent families, 14 pending patents, and 40 codified trade secrets. For more information, visit the Smartkem website or follow them on LinkedIn.

JEV’s Data Center Initiative

Earlier this year, Jericho launched its data center initiative, strategically leveraging its expansive 41,000-acre portfolio of active oil and gas joint venture properties in Oklahoma. By harnessing abundant, low-cost on-site natural gas, JEV is transforming its energy assets into secure, scalable, high-performance AI computing hubs tailored for the AI era.

JEV’s build-to-suit (BTS) data centers capitalize on the company’s extensive network of over 60 miles of gas, power, and water infrastructure, along with prime positioning on a U.S. fiber “superhighway,” to offer unparalleled connectivity and performance.

In July, Jericho announced a memorandum of understanding (MOU) with M2 Development Solutions LLC to accelerate the development of AI data centers across the United States. Finalized on July 6, the agreement expands Jericho’s reach beyond its Oklahoma asset base into Ohio and Nevada, utilizing M2’s large-scale development sites.

The Ohio location spans 400 acres and includes access to utility power and on-site natural gas power generation assets. In Nevada, the 3,700-acre site offers a diverse energy mix, including utility power access, on-site geothermal and solar capabilities, and natural gas-fed power generation. These features provide energy diversification options at a scale suitable for AI data center operations, which demand substantial and reliable power sources.

“Our partnership with M2 is a transformative step in executing our AI data center strategy,” said Williamson at the time. “Integrating M2’s gigawatt-scale sites accelerates our ability to deliver scalable, energy-efficient infrastructure for modern AI workloads.”

The Catalyst: We’re Consuming More Electricity Than Ever

In a significant shift from nearly two decades of stagnant U.S. load growth, Americans are now consuming more electricity than ever, according to a report by ICF International. The rapid expansion of data centers to support AI technology, along with a surge in new manufacturing and oil and gas production, is driving a notable increase in industrial electricity demand.

Additionally, electric vehicles, heat pumps, and other energy-intensive products are further contributing to this growth. ICF’s analysis suggests that U.S. electricity demand is expected to rise by 25% by 2030 and by 78% by 2050, compared to 2023 levels. This surge in demand has significant implications for the reliability and affordability of electricity. For residential customers, electricity rates could increase by 15% to 40% by 2030, depending on the market. By 2050, some rates might even double.


In a piece for U.S. Global Investors dated July 25, Frank Holmes compared the current AI advancements to the scale and ambition of the defense expansion during the Reagan era or the shale boom of the 2010s.

According to Grand View Research, the global data center market size was estimated at US$347.6 billion in 2024 and is projected to reach US$652.01 billion by 2030, growing at a compound annual growth rate (CAGR) of 11.2% from 2025 to 2030. “The rapid adoption of digital transformation initiatives, cloud computing, and emerging technologies such as artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) have substantially increased demand,” Holmes noted.
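For readers unfamiliar with the metric, compound annual growth rate is a standard formula rather than anything specific to the report (and the report’s own base-year value for the 2025 to 2030 calculation is not given here):

$$\text{CAGR} = \left(\frac{V_{\text{end}}}{V_{\text{start}}}\right)^{1/n} - 1, \qquad \text{e.g. } \left(\frac{652.01}{347.6}\right)^{1/6} - 1 \approx 11.1\%,$$

which is broadly in line with the 11.2% figure reported for 2025 to 2030.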

Ownership and Share Structure

Around 41% of Jericho’s shares are held by management and insiders, the company said. They include CEO Brian Williamson, who owns 1.38%; founder Allen Wilson, who owns 0.99%; and board member Nicholas Baxter, who owns 0.49%, according to Refinitiv’s latest research.

Around 34% of shares are held by the company’s “Top 10 external shareholders.” The rest is in retail.

JEV’s market cap is CA$35.07 million, and it trades in a 52-week range of CA$0.08 to CA$0.21. It has 304.03 million shares outstanding, with about 220.98 million floating.

 

Important Disclosures:

  1. As of the date of this article, officers and/or employees of Streetwise Reports LLC (including members of their household) own securities of Jericho Energy Ventures Inc.
  2. Steve Sobek wrote this article for Streetwise Reports LLC and provides services to Streetwise Reports as an employee.
  3. This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports’ terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.

For additional disclosures, please click here.

Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago

By Cameron Shackell, Queensland University of Technology 

The electrification boom of the 1920s set the United States up for a century of industrial dominance and powered a global economic revolution.

But before electricity faded from a red-hot tech sector into invisible infrastructure, the world went through profound social change, a speculative bubble, a stock market crash, mass unemployment and a decade of global turmoil.

Understanding this history matters now. Artificial intelligence (AI) is a similar general purpose technology and looks set to reshape every aspect of the economy. But it’s already showing some of the hallmarks of electricity’s rise, peak and bust in the decade known as the Roaring Twenties.

The reckoning that followed could be about to repeat.

A crowd gathers outside the New York Stock Exchange following the ‘Great Crash’ of October 1929.
New York World-Telegram and the Sun Newspaper Photograph Collection, US Library of Congress

First came the electricity boom

A century ago, when people at the New York Stock Exchange talked about the latest “high tech” investments, they were talking about electricity.

Investors poured money into suppliers such as Electric Bond & Share and Commonwealth Edison, as well as companies using electricity in new ways, such as General Electric (for appliances), AT&T (telecommunications) and RCA (radio).

It wasn’t a hard sell. Electricity brought modern movies, new magazines from faster printing presses, and evenings by the radio.

It was also an obvious economic game changer, promising automation, higher productivity, and a future full of leisure and consumption. In 1920, even Soviet revolutionary leader Vladimir Lenin declared: “Communism is Soviet power plus the electrification of the whole country.”

Today, a similar global urgency grips both communist and capitalist countries about AI, not least because of military applications.

Then came the peak

Like AI stocks now, electricity stocks “became favorites in the boom even though their fundamentals were difficult to assess”.

Market power was concentrated. Big players used complex holding structures to dodge rules and sell shares in basically the same companies to the public under different names.

US finance professor Harold Bierman, who argued that attempts to regulate overpriced utility stocks were a direct trigger for the crash, estimated that utilities made up 18% of the New York Stock Exchange in September 1929. Within electricity supply, 80% of the market was owned by just a handful of holding firms.

But that’s just the utilities. As today with AI, there was a much larger ecosystem.

Almost every 1920s “megacap” (the largest companies at the time) owed something to electrification. General Motors, for example, had overtaken Ford using new electric production techniques.

Essentially, electricity became the backdrop to the market in the same way AI is doing, as businesses work to become “AI-enabled”.

No wonder that today tech giants command over a third of the S&P 500 index and nearly three-quarters of the NASDAQ. Transformative technology drives not only economic growth, but also extreme market concentration.

In 1929, to reflect the new sector’s importance, Dow Jones launched the last of its three great stock averages: the electricity-heavy Dow Jones Utilities Average.

But then came the bust

The Dow Jones Utilities Average went as high as 144 in 1929. But by 1934, it had collapsed to just 17.

No single cause explains the New York Stock Exchange’s unprecedented “Great Crash”, which began on October 24 1929 and preceded the worldwide Great Depression.

That crash triggered a banking crisis, credit collapse, business failures, and a drastic fall in production. Unemployment soared from just 3% to 25% of US workers by 1933 and stayed in double figures until the US entered the second world war in 1941.

Lithograph of Wall Street, New York City, after the 1929 stock market crash. James Rosenberg, Ben and Beatrice Goldstein Foundation collection, US Library of Congress

The ripple effects were global, with most countries seeing a rise in unemployment, especially in countries reliant on international trade, such as Chile, Australia and Canada, as well as Germany.

The promised age of shorter hours and electric leisure turned into soup kitchens and bread lines.

The collapse exposed fraud and excess. Electricity entrepreneur Samuel Insull, once Thomas Edison’s protégé and builder of Chicago’s Commonwealth Edison, was at one point worth US$150 million – an even more staggering amount at the time.

But after Insull’s empire went bankrupt in 1932, he was indicted for embezzlement and larceny. He fled overseas, was brought back, and eventually acquitted – but 600,000 shareholders and 500,000 bondholders lost everything.

However, to some Insull seemed less a criminal mastermind than a scapegoat for a system whose flaws ran far deeper.

Reforms unthinkable during the boom years followed.

The Public Utility Holding Company Act of 1935 broke up the huge holding company structures and imposed regional separation. Once exciting electricity darlings became boring regulated infrastructure: a fact reflected in the humble “Electric Company” square on the original 1935 Monopoly board.

Lessons from the 1920s for today

AI is rolling out faster than even those seeking to use it in business or government policy can properly manage.

Like electricity a century ago, a few interconnected firms are building today’s AI infrastructure.

And like a century ago, investors are piling in – though many don’t know the extent of their exposure through their superannuation funds or exchange traded funds (ETFs).

Just as in the late 1920s, today’s regulation of AI is still loose in many parts of the world – though the European Union is taking a tougher approach with its world-first AI law.

US President Donald Trump has taken the opposite approach, actively cutting “onerous regulation” of AI. Some US states have responded by taking action themselves. The courts, when consulted, are hamstrung by laws and definitions written for a different era.

Can we transition to AI becoming invisible infrastructure, like electricity, without another bust that is only then followed by reform?

If the parallels to the electrification boom remain unnoticed, the chances are slim.

About the Author:

Cameron Shackell, Sessional Academic, School of Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Winning a bidding war isn’t always a win, research on 14 million home sales shows

By Soon Hyeok Choi, Rochester Institute of Technology 

In today’s hot housing market, winning a bidding war can feel like a triumph. But my research shows it often comes with a catch: Homebuyers who win bidding wars tend to experience a “winner’s curse,” systematically overpaying for their new homes.

I’m a real estate economist, and my colleagues and I analyzed nearly 14 million home sales in 30 U.S. states over roughly two decades. We found that people who paid more than the asking price for their homes – a reliable sign of a bidding war – were more likely to default on their mortgages and saw significantly weaker returns.

How much weaker? On average, homebuyers who won bidding wars saw annual returns that were about 1.3 percentage points lower than those who didn’t, we found. We specifically looked at “unlevered” returns – basically, the returns you’d get if you bought the home outright with cash, without factoring in a mortgage.

Since the typical homeowner in our sample held a property for 6.3 years before selling it, this translates to about an 8.2% overpayment. Bidding-war winners were also 1.9 percentage points likelier to default.
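As a rough check (an inference from the figures quoted above, not a statement of the authors’ exact method), the 8.2% figure is consistent with simply accumulating the annual return gap over the average holding period without compounding:

$$1.3\ \text{percentage points} \times 6.3\ \text{years} \approx 8.2\%.$$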

Perhaps that loss would be worth it to someone who absolutely loves the property – but we found that homebuyers who purchase after a bidding war are also faster to resell. This suggests their overpayment is based less on enduring affection and more on bidding-war fever.

We also found that the effects of the winner’s curse – lower home appreciation and higher default rates – are stronger in places where bidding wars are more common. One example is my hometown of Rochester, New York, which has become a bidding-war hot spot in recent years.

Who bears the brunt? Lower-income, Black and Hispanic buyers are more likely to overpay in bidding wars, we found, making them more likely to suffer from the winner’s curse. This suggests that hot housing markets can worsen inequality.

Why it matters

While housing is the largest single form of wealth Americans own, past research on the winner’s curse mostly dealt with land auctions and company mergers – not the nation’s roughly 76 million owner-occupied, single-family homes. Our work is the first to show direct evidence of the winner’s curse in residential housing markets.

This matters now because the housing market is cooling. Those who bought in the post-pandemic housing market and listed their homes in 2025 are already facing the risk of selling at a loss. Because this risk falls disproportionately on Black and Hispanic homebuyers, it could further widen the wealth gap.

By one measure, foreclosures are up 18% year over year. If the brunt of these losses falls on lower-income or otherwise vulnerable homeowners, the result could be an increase in housing insecurity and homelessness.

The good news is that the winner’s curse may be preventable. Better resources to prepare first-time homebuyers and comprehensive financial education related to mortgages and debt could help.

What still isn’t known

It’s possible more transparent bidding processes – or even formal auction systems for popular homes – could better inform prospective buyers and help them stave off the temptation of overpayment. Should the U.S. require real estate brokers or banks to caution their clients to think twice before going above the asking price? Or would that be unfair to sellers? Experimental research on these points would be useful.

Finally, our research focuses on the U.S. housing market. Whether the winner’s curse afflicts buyers in other countries remains an open question.

The Research Brief is a short take on interesting academic work.

About the Author:

Soon Hyeok Choi, Assistant Professor of Real Estate Finance, Rochester Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A billion-dollar drug was found in Easter Island soil – what scientists and companies owe the Indigenous people they studied

By Ted Powers, University of California, Davis 

An antibiotic discovered on Easter Island in 1964 sparked a billion-dollar pharmaceutical success story. Yet the history told about this “miracle drug” has completely left out the people and politics that made its discovery possible.

Named after the island’s Indigenous name, Rapa Nui, the drug rapamycin was initially developed as an immunosuppressant to prevent organ transplant rejection and to improve the efficacy of stents to treat coronary artery disease. Its use has since expanded to treat various types of cancer, and researchers are currently exploring its potential to treat diabetes, neurodegenerative diseases and even aging. Indeed, studies raising rapamycin’s promise to extend lifespan or combat age-related diseases seem to be published almost daily. A PubMed search reveals over 59,000 journal articles that mention rapamycin, making it one of the most talked-about drugs in medicine.

Chemical structure of rapamycin. Fvasconcellos/Wikimedia Commons

At the heart of rapamycin’s power lies its ability to inhibit a protein called the target of rapamycin kinase, or TOR. This protein acts as a master regulator of cell growth and metabolism. Together with other partner proteins, TOR controls how cells respond to nutrients, stress and environmental signals, thereby influencing major processes such as protein synthesis and immune function. Given its central role in these fundamental cellular activities, it is not surprising that cancer, metabolic disorders and age-related diseases are linked to the malfunction of TOR.

Despite being so ubiquitous in science and medicine, how rapamycin was discovered has remained largely unknown to the public. Many in the field are aware that scientists from the pharmaceutical company Ayerst Research Laboratories isolated the molecule from a soil sample containing the bacterium Streptomyces hygroscopicus in the mid-1970s. What is less well known is that this soil sample was collected as part of a Canadian-led mission to Rapa Nui in 1964, called the Medical Expedition to Easter Island, or METEI.

As a scientist who built my career around the effects of rapamycin on cells, I felt compelled to understand and share the human story underlying its origin. Learning about historian Jacalyn Duffin’s work on METEI completely changed how I and many of my colleagues view our own field.

Unearthing rapamycin’s complex legacy raises important questions about systemic bias in biomedical research and what pharmaceutical companies owe to the Indigenous lands from which they mine their blockbuster discoveries.

History of METEI

The Medical Expedition to Easter Island was the brainchild of a Canadian team made up of surgeon Stanley Skoryna and bacteriologist Georges Nogrady. Their goal was to study how an isolated population adapted to environmental stress, and they believed the planned construction of an international airport on Easter Island offered a unique opportunity. They presumed that the airport would increase outside contact with the island’s population, leading to changes in its health and wellness.

With funding from the World Health Organization and logistical support from the Royal Canadian Navy, METEI arrived in Rapa Nui in December 1964. Over the course of three months, the team conducted medical examinations on nearly all 1,000 island inhabitants, collecting biological samples and systematically surveying the island’s flora and fauna.

It was as part of these efforts that Nogrady gathered over 200 soil samples, one of which ended up containing the rapamycin-producing Streptomyces strain of bacteria.

METEI logo. Georges Nogrady, CC BY-NC-ND

It’s important to realize that the expedition’s primary objective was to study the Rapa Nui people as a sort of living laboratory. They encouraged participation through bribery by offering gifts, food and supplies, and through coercion by enlisting a long-serving Franciscan priest on the island to aid in recruitment. While the researchers’ intentions may have been honorable, it is nevertheless an example of scientific colonialism, where a team of white investigators chose to study a group of predominantly nonwhite subjects without their input, resulting in a power imbalance.

There was an inherent bias in the inception of METEI. For one, the researchers assumed the Rapa Nui had been relatively isolated from the rest of the world when there was in fact a long history of interactions with countries outside the island, beginning with reports from the early 1700s through the late 1800s.

METEI also assumed that the Rapa Nui were genetically homogeneous, ignoring the island’s complex history of migration, slavery and disease. For example, the modern population of Rapa Nui are mixed race, from both Polynesian and South American ancestors. The population also included survivors of the African slave trade who were returned to the island and brought with them diseases, including smallpox.

This miscalculation undermined one of METEI’s key research goals: to assess how genetics affect disease risk. While the team published a number of studies describing the different fauna associated with the Rapa Nui, their inability to establish a baseline is likely one reason there was no follow-up study after the airport on Easter Island was completed in 1967.

Giving credit where it is due

Omissions in the origin stories of rapamycin reflect common ethical blind spots in how scientific discoveries are remembered.

Georges Nogrady carried soil samples back from Rapa Nui, one of which eventually reached Ayerst Research Laboratories. There, Surendra Sehgal and his team isolated what was named rapamycin, ultimately bringing it to market in the late 1990s as the immunosuppressant Rapamune. While Sehgal’s persistence was key in keeping the project alive through corporate upheavals – going as far as to stash a culture at home – neither Nogrady nor the METEI was ever credited in his landmark publications.

Although rapamycin has generated billions of dollars in revenue, the Rapa Nui people have received no financial benefit to date. This raises questions about Indigenous rights and biopiracy, which is the commercialization of Indigenous knowledge.

Agreements like the United Nations’s 1992 Convention on Biological Diversity and the 2007 Declaration on the Rights of Indigenous Peoples aim to protect Indigenous claims to biological resources by encouraging countries to obtain consent and input from Indigenous people and provide redress for potential harms before starting projects. However, these principles were not in place during METEI’s time.

Some argue that because the bacteria that produces rapamycin has since been found in other locations, Easter Island’s soil was not uniquely essential to the drug’s discovery. Moreover, because the islanders did not use rapamycin or even know about its presence on the island, some have countered that it is not a resource that can be “stolen.”

However, the discovery of rapamycin on Rapa Nui set the foundation for all subsequent research and commercialization around the molecule, and this only happened because the people were the subjects of study. Formally recognizing and educating the public about the essential role the Rapa Nui played in the eventual discovery of rapamycin is key to compensating them for their contributions.

In recent years, the broader pharmaceutical industry has begun to recognize the importance of fair compensation for Indigenous contributions. Some companies have pledged to reinvest in communities where valuable natural products are sourced. However, for the Rapa Nui, pharmaceutical companies that have directly profited from rapamycin have not yet made such an acknowledgment.

Ultimately, METEI is a story of both scientific triumph and social ambiguities. While the discovery of rapamycin has transformed medicine, the expedition’s impact on the Rapa Nui people is more complicated. I believe issues of biomedical consent, scientific colonialism and overlooked contributions highlight the need for a more critical examination and awareness of the legacy of breakthrough scientific discoveries.

About the Author:

Ted Powers, Professor of Molecular and Cellular Biology, University of California, Davis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

US economy is already on the edge – a prolonged government shutdown could send it tumbling over

By John W. Diamond, Rice University 

The economic consequences of the current federal government shutdown hinge critically on how long it lasts. If it is resolved quickly, the costs will be small, but if it drags on, it could send the U.S. economy into a tailspin.

That’s because the economy is already in a precarious state, with the labor market struggling, consumers losing confidence and uncertainty mounting.

As an economist who studies public finance, I closely follow how government policies affect the economy. Let me explain how a prolonged shutdown could affect the economy – and why it could be a tipping point to recession.

Direct impacts from a government shutdown

The partial government shutdown began on Oct. 1, 2025, as Democrats and Republicans failed to reach a deal on funding some portion of the federal government. A partial shutdown means that some funding bills have been approved, entitlement spending continues since it does not rely on annual appropriations, and some workers are deemed necessary and stay on the job unpaid.

While most of the 20 shutdowns that occurred from 1976 through 2024 lasted only a few days to a week, there are signs the current one may not be resolved so quickly. The economy would definitely take a direct hit to gross domestic product from a lengthy shutdown, but it’s the indirect impacts that could be more harmful.

The most recent shutdown, which extended over the 2018-2019 winter holidays and lasted 35 days, was the longest in U.S. history. After it ended, the Congressional Budget Office estimated the partial shutdown delayed approximately US$18 billion in federal discretionary spending, which translated into an $11 billion reduction in real GDP.

Most of that lost output was made up later once the shutdown ended, the CBO noted. It estimated that the permanent losses were about $3 billion – a drop in the bucket for the $30 trillion U.S. economy.

The indirect and more lasting impacts

The full impact may depend to a large extent on the psychology of the average consumer.

Recent data suggests that consumer confidence is falling as the stagnation in the labor market becomes clearer. Business sentiment is harder to read: the manufacturing index continues to indicate the sector is in contraction, while other measures of business confidence point to mixed expectations about the future.

If the shutdown drags on, the psychological effects may lead to a larger loss of confidence among consumers and businesses. Given that consumer spending accounts for 70% of economic activity, a fall in consumer confidence could signal a turning point in the economy.

These indirect effects are in addition to the direct impact of lost income for federal workers and those that operate on federal contracts, which leads to reductions in consumption and production.

The risk of significant government layoffs, beyond the usual furloughs, could deepen the economic damage. Extensive layoffs would shift the losses from a temporary delay to a more permanent loss of income and human capital, reducing aggregate demand and potentially increasing unemployment spillovers into the private sector.

In short, while shutdowns that end quickly tend to inflict modest, mostly recoverable losses, a protracted shutdown – especially one involving layoffs of a significant number of government workers – could inflict larger, lasting impacts on the economy.

US economy is already in distress

This is all occurring as the U.S. labor market is flashing warnings.

Payrolls grew by only 22,000 in August, with July and June estimates revised down by 21,000. This follows payroll growth of only 73,000 in July, with May and June estimates revised down by 258,000. In addition, preliminary annual revisions to the employment data show the economy gained 911,000 fewer jobs in the previous year than had been reported.

Long-term unemployment is also rising, with 1.8 million people out of work for more than 27 weeks – nearly a quarter of the total number of unemployed individuals.

At the same time, AI adoption and cost-cutting could further reduce labor demand, while an aging workforce and lower immigration shrink labor supply. Fed Chair Jerome Powell refers to this as a “curious kind of balance” in the labor market.

In other words, the job market appears to have come to a screeching halt, making it difficult for recent graduates to find work. Unemployment among recent graduates – those who are 22 to 27 years old – is now 5.3%, compared with a total unemployment rate of 4.3%.

The latest data from the ADP employment report, which measures only private company data, shows that the economy lost 32,000 jobs in September. That’s the biggest decline in 2½ years. While that’s worrying, economists like me usually wait for the official Bureau of Labor Statistics numbers to come out to confirm the accuracy of the payroll processing firm’s report.

The government data that was supposed to come out on Oct. 3 might have offered a possible counterpoint to the bad ADP news, but due to the shutdown BLS will not be releasing the report.

Problems Fed rate cuts can’t fix

This will only increase the uncertainty surrounding the health of the U.S. economy. And it adds to the uncertainty created by on-again, off-again tariffs as well as the newly imposed tariffs on lumber, furniture and other goods.

Against this backdrop, the Fed is expected to lower interest rates at least two more times this year to stimulate consumer and business spending following its September quarter-point cut. This raises the risk of reigniting inflation, but the cooling labor market is a more immediate concern for the Fed.

While lower short-term rates may help at the margin, I believe they cannot resolve the deeper challenges, such as massive government deficits and debt, tight household budgets, a housing affordability crisis and a shrinking labor force.

The question now is not whether the Fed will cut rates (it likely will) but whether that cut will help, particularly if the shutdown lasts weeks or more. Monetary policy alone cannot overcome the uncertainty created by tariffs, the lack of fiscal restraint, companies focused on cutting costs by replacing people with technology, the impact of the shutdown and the fears of consumers about the future.

Lower interest rates may buy time, but they won’t solve these structural problems facing the U.S. economy.

About the Author:

John W. Diamond, Director of the Center for Public Finance at the Baker Institute, Rice University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Why do big oil companies invest in green energy?

By Michael Oxman, Georgia Institute of Technology 

Some major oil companies such as Shell and BP that once were touted as leading the way in clean energy investments are now pulling back from those projects to refocus on oil and gas production. Others, such as Exxon Mobil and Chevron, have concentrated on oil and gas but announced recent investments in carbon capture projects, as well as in lithium and graphite production for electric vehicle batteries.

National oil companies have also been investing in renewable energy. For example, Saudi Aramco has invested in clean energy while at the same time asserting that it’s unrealistic to phase out oil and gas entirely.

But the larger question is why oil companies would invest in clean energy at all, especially at a time when many federal clean energy incentives are being eliminated and climate science is being dismantled, at least in the United States.

Some answers depend on whom you ask. More traditional petroleum industry followers would urge the companies to keep focused on their core fossil fuel businesses to meet growing energy demand and corresponding near-term shareholder returns. Other shareholders and stakeholders concerned about sustainability and the climate – including an increasing number of companies with sustainability goals – would likely point out the business opportunities for clean energy to meet global needs.

Other answers depend on the particular company itself. Very small producers have different business plans than very large private and public companies. Geography and regional policies can also play a key role. And government-owned companies such as Saudi Aramco, Gazprom and the China National Petroleum Corp. control the majority of the world’s oil and gas resources with revenues that support their national economies.

Despite the relatively modest scale of investment in clean energy by oil and gas companies so far, there are several business reasons oil companies would increase their investments in clean energy over time.

The oil and gas industry has provided energy that has helped create much of modern society and technology, though those advances have also come with significant environmental and social costs. My own experience in the oil industry gave me insight into how at least some of these companies try to reconcile this tension and to make strategic portfolio decisions about which “green” technologies to invest in. Now, as managing director and a professor of the practice at the Ray C. Anderson Center for Sustainable Business at Georgia Tech, I seek ways to eliminate the boundaries and identify mutually reinforcing innovations among business interests and environmental concerns.

Diversification and financial drivers

Just like financial advisers tell you to diversify your 401(k) investments, companies do so to weather different kinds of volatility, from commodity prices to political instability. Oil and gas markets are notoriously cyclical, so investments in clean energy can hedge against these shifts for companies and investors alike.

Clean energy can also provide opportunities for new revenue. Many customers want to buy clean energy, and oil companies want to be positioned to cash in as this transition occurs. By developing employees’ expertise and investing in emerging technologies, they can be ready for commercial opportunities in biofuels, renewable natural gas, hydrogen and other pathways that may overlap with their existing, core business competencies.

Fossil fuel companies have also found what other companies have: Clean energy can reduce costs. Some oil companies not only invest in energy efficiency for their buildings but use solar or wind to power their wells. And adding renewable energy to their activities can also lower the cost of investing in these companies.

Public pressure

All companies, including those in oil and gas, are under growing pressure to address climate change, from the public, from other companies with whom they do business and from government regulators – at least outside the U.S. For example, campaigns seeking to reduce investment in fossil fuels are increasing along with climate-related lawsuits. Government policies focused on both mitigating carbon emissions and enhancing energy independence are also making headway in some locations.

In response, many oil companies are reducing their own operational emissions and setting targets to offset or eliminate emissions from products that they sell – though many observers question the viability of these commitments. Other companies are investing in emerging technologies such as hydrogen and methods to remove carbon dioxide from the atmosphere.

Some companies, such as BP and Equinor, have previously gone so far as to rebrand themselves and acquire clean energy businesses. But those efforts have also been criticized as “greenwashing,” taking actions for public relations value rather than real results.

How far can this go?

It is even possible for a fossil fuel company to reinvent itself as a clean energy operation. Denmark’s Orsted – formerly known as Danish Oil and Natural Gas – transitioned from fossil fuels to become a global leader in offshore wind. The company, whose majority owner is the Danish government, made the shift, however, with the help of significant public and political support.

But most large oil companies aren’t likely to completely reinvent themselves anytime soon. Making that change requires leadership, investor pressure, customer demand and shifts in government policy, such as putting a price or tax on carbon emissions.

To show students in my sustainability classes how companies’ choices affect both the environment and the industry as a whole, I use the MIT Fishbanks simulation. Students run fictional fishing companies competing for profit. Even when they know the fish population is finite, they overfish, leading to the collapse of the fishery and its businesses. Short-term profits cause long-term disaster for the fishery and the businesses that depend on it.

The metaphor for oil and gas is clear: As fossil fuels continue to be extracted and burned, they release planet-warming emissions, harming the planet as a whole. They also pose substantial business risks to the oil and gas industry itself.

Yet students in a recent class showed me that a more collective way of thinking may be possible. Teams voluntarily reduced their fishing levels to preserve long-term business and environmental sustainability, and they even cooperated with their competitors. They did so without in-game regulatory threats, shareholder or customer complaints, or lawsuits.

Their shared understanding that the future of their own fishing companies was at stake makes me hopeful that this type of leadership may take hold in real companies and the energy system as a whole. But the question remains about how fast that change can happen, amid the accelerating global demand for more energy along with the increasing urgency and severity of climate change and its effects.

About the Author:

Michael Oxman, Professor of the Practice of Sustainable Business, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Why aren’t companies speeding up investment? A new theory offers an answer to an economic paradox

By David Ikenberry, University of Colorado Boulder 

For years, I’ve puzzled over a question that seems to defy common sense: If stock markets are hitting records and tech innovation seems endless, why aren’t companies pouring money back into new projects?

Yes, they’re still investing – but the pace of business spending is slower than you’d expect, especially outside of AI.

And if you’ve noticed headlines about sluggish business spending even as corporate profits soar, you’re not alone. It’s a puzzle that’s confounded economists, policymakers and investors for decades. Back in 1975, U.S. public companies reinvested an average of 25 cents for every dollar on their balance sheets. Today, that figure is closer to 12 cents.

In other words, corporate America is flush with cash, but it’s surprisingly stingy about reinvesting in its own future. What happened?

I’m an economist, and my colleague Gustavo Grullon and I recently published a study in the Journal of Finance that turns the field’s conventional wisdom on its head. Our research suggests the issue isn’t cautious executives or jittery markets – it’s about how economists have historically measured companies’ incentives to invest in the first place.

Asking the wrong Q

For decades, economists have relied on a simple but appealing ratio – Tobin’s Q, named after the famous economist James Tobin – to gauge whether companies should ramp up investment.

They calculate this by dividing a company’s market value – what it would take to purchase the firm outright with cash – by its replacement value, or how much it would cost to rebuild the company from scratch. The result is called “Q.” The higher the Q, the theory goes, the more incentive executives have to invest.
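Written out in standard textbook notation (the symbols here are generic, not taken from the study), the ratio is:

$$Q = \frac{V_{\text{market}}}{K_{\text{replacement}}},$$

where the numerator is the firm’s total market value and the denominator is the cost of rebuilding its assets from scratch; a Q above 1 is conventionally read as a signal to expand, and a Q below 1 as a signal to hold back.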

But reality hasn’t conformed to fit the theory. Over the past half-century, Tobin’s Q has gone up, yet investment rates have gone down sharply.

Why the disconnect? Our research points to one key culprit: excess capacity. Many U.S. companies already have more factories, machines or service capability than they can use. Because it does not correct for this issue, the traditional Tobin’s Q overstates the incentive that companies have to grow.

To see this, consider a commercial real estate company that owns a portfolio of office buildings. In recent years, with the rise of e-commerce and remote work, many of its properties have been running well below capacity. Now suppose a few new tenants start paying rent and begin absorbing a portion of that empty space. The company’s stock price will rise in response to these new cash flows, which in turn will lead Q to rise.

Traditionally, this increase in Q would suggest that it’s a good time to invest in new buildings – but the reality is quite different with idle capacity still in the system. Why pour money into building another office tower if existing ones still have empty floors?
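A minimal numerical sketch of that intuition, using hypothetical figures chosen purely for illustration (none of these numbers come from the study):

```python
# Hypothetical landlord: 10 identical office floors, each costing 1.0 to rebuild.
REPLACEMENT_VALUE = 10 * 1.0

def market_value(occupied_floors: int, value_per_occupied_floor: float = 1.2) -> float:
    """Market value reflects only the floors currently generating rent."""
    return occupied_floors * value_per_occupied_floor

for occupied in (6, 8):
    q = market_value(occupied) / REPLACEMENT_VALUE
    print(f"{occupied} occupied floors -> average Q = {q:.2f}")

# Output:
#   6 occupied floors -> average Q = 0.72
#   8 occupied floors -> average Q = 0.96
# Average Q rises as tenants absorb empty space, yet an 11th floor would add
# capacity nobody needs, so the marginal value of one more dollar of
# investment stays low: the average ratio overstates the incentive to build.
```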

The key idea is that what matters isn’t the average value of all assets – it’s the marginal value of adding one more dollar of investment. And because capacity utilization has been steadily eroding over the past half-century, many firms see little reason to invest.

That last point may come as a surprise, but the U.S. economy, with all its factories and offices, isn’t nearly as abuzz with activity as it was after, say, World War II. Today, many sectors operate well below full throttle. This growing slack in the system over time helps explain why companies have pulled back on their rate of investment, even as profits and market values climb.

Why has capacity utilization fallen so much over the past half-century? It’s not entirely clear, but what economists call “structural economic rigidities” – things such as regulatory hurdles, labor market frictions or shifts in cost structure – seem to be part of the answer. These factors can drag businesses into a state of chronic underuse, especially after recessions.

Why it matters

This isn’t just an academic debate. The implications are profound, whether you closely follow Wall Street or just enjoy armchair economic policy debates. For one thing, this dynamic might help explain why tax cuts haven’t spurred investment the way supporters have hoped.

Take the 2017 Tax Cuts and Jobs Act, which slashed the top corporate tax rate from 35% to 21% and introduced full expensing for equipment investments. Supporters promised a wave of new investment.

But when my colleague and I looked at the numbers, we found the opposite. In the four years before the tax cuts, publicly traded U.S. firms had an aggregate investment rate, including intangibles, of 13.9%. In the four years after the tax cut, the average investment rate fell to 12.4% – in other words, no evidence of a bump.

Where did those liberated cash flows go? Instead of plowing this newfound cash after the tax cuts into new projects, many companies funneled it into stock buybacks and dividends.

In retrospect, this makes sense. If a company has excess capacity, the incentive to invest should be more muted, even if new machines are suddenly cheaper thanks to tax breaks. If the demand isn’t there, why buy them?

Even with the most generous tax incentives, the core challenge remains: You can’t force-feed investment into an economy already swimming in excess capacity. If companies don’t see real, scalable demand, tax breaks alone aren’t likely to unlock a new era of business spending.

That doesn’t mean tax policy doesn’t matter – it does, especially for smaller firms with real growth prospects. But for the large, well-established firms that make up the lion’s share of the economy, the bigger challenge is demand. Rather than trying to stimulate even more investment, policymakers should prioritize understanding why demand is sagging relative to supply and reducing economic rigidities where they can. That way, the capacity generated by new investment has somewhere useful to go.

About the Author:

David Ikenberry, Professor of Finance, Leeds School of Business, University of Colorado Boulder

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Scams and frauds: Here are the tactics criminals use on you in the age of AI and cryptocurrencies

By Rahul Telang, Carnegie Mellon University 

Scams are nothing new – fraud has existed as long as human greed. What changes are the tools.

Scammers thrive on exploiting vulnerable, uninformed users, and they adapt to whatever technologies or trends dominate the moment. In 2025, that means AI, cryptocurrencies and stolen personal data are their weapons of choice.

And, as always, the duty, fear and hope of their targets provide openings. Today, duty often means following instructions from bosses or co-workers, whom scammers can impersonate. Fear is that a loved one, whom scammers can also impersonate, is in danger. And hope is often for an investment scheme or job opportunity to pay off.

AI-powered scams and deepfakes

Artificial intelligence is no longer niche – it’s cheap, accessible and effective. While businesses use AI for advertising and customer support, scammers exploit the same tools to mimic reality with disturbing precision.

Deepfake scams use high-tech tools and old-fashioned emotional manipulation.

Criminals are using AI-generated audio or video to impersonate CEOs, managers or even family members in distress. Employees have been tricked into transferring money or leaking sensitive data. Over 105,000 such deepfake attacks were recorded in the U.S. in 2024, costing more than US$200 million in the first quarter of 2025 alone. Victims often cannot distinguish synthetic voices or faces from real ones.

Fraudsters are also using emotional manipulation. The scammers make phone calls or send convincing AI-written texts posing as relatives or friends in distress. Elderly victims in particular fall prey when they believe a grandchild or other family member is in urgent trouble. The Federal Trade Commission has outlined how scammers use fake emergencies to pose as relatives.

Cryptocurrency scams

Crypto remains the Wild West of finance — fast, unregulated and ripe for exploitation.

Pump-and-dump scammers artificially inflate the price of a cryptocurrency through hype on social media to lure investors with promises of huge returns – the pump – and then sell off their holdings – the dump – leaving victims with worthless tokens.

Pig butchering is a hybrid of romance scams and crypto fraud. Scammers build trust over weeks or months before persuading victims to invest in fake crypto platforms. Once the scammers have extracted enough money from the victim, they vanish.

Pig-butchering scams lure people into fake online relationships, often with devastating consequences.

Scammers also use cryptocurrencies as a means of extracting money from people in impersonation scams and other forms of fraud. For example, scammers direct victims to bitcoin ATMs to deposit large sums of cash and convert it to hard-to-trace cryptocurrency as payment for fictitious fines.

Phishing, smishing, tech support and jobs

Old scams don’t die; they evolve.

Phishing and smishing have been around for years. Victims are tricked into clicking links in emails or text messages, leading to malware downloads, credential theft or ransomware attacks. AI has made these lures eerily realistic, mimicking corporate tone, grammar and even video content.

Tech support scams often start with pop-ups on computer screens that warn of viruses or identity theft, urging users to call a number. Sometimes they begin with a direct cold call to the victim. Once the victim is on a call with the fake tech support line, the scammers convince them to grant remote access to their supposedly compromised computer. Once inside, scammers install malware, steal data, demand payment or all three.

Fake websites and listings are another common type of scam. Fraudulent sites impersonating universities or ticket sellers trick victims into paying for fake admissions, concerts or goods.

In one example, a website for “Southeastern Michigan University” came online and began offering details about admission. There is no such university. Eastern Michigan University filed a complaint alleging that “Southeastern Michigan University” was copying its website and defrauding unsuspecting victims.

The rise of remote and gig work has opened new fraud avenues.

Victims are offered fake jobs with promises of high pay and flexible hours. In reality, scammers extract “placement fees” or harvest sensitive personal data such as Social Security numbers and bank details, which are later used for identity theft.

How you can protect yourself

Technology has changed, but the basic principles remain the same: Never click on suspicious links or download attachments from unknown senders, and enter personal information only if you are sure that the website is legitimate. Avoid using third-party apps or links. Legitimate businesses have apps or real websites of their own.

Enable two-factor authentication wherever possible. It adds a layer of protection even if a password is stolen. Keep software updated to patch security holes. Most software can update automatically or will warn you when a patch is available.

Remember that a legitimate business will never contact you out of the blue demanding personal information or a money transfer. Such requests are a red flag.

Relationships are a trickier matter. The state of California provides details on how people can avoid being victims of pig butchering.

Technology has supercharged age-old fraud. AI makes deception virtually indistinguishable from reality, crypto enables anonymous theft, and the remote-work era expands opportunities to trick people. The constant: Scammers prey on trust, urgency and ignorance. Awareness and skepticism remain your best defense.

About the Author:

Rahul Telang, Professor of Information Systems, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The discovery of a gravitational wave 10 years ago shook astrophysics – these ripples in spacetime continue to reveal dark objects in the cosmos

By Chad Hanna, Penn State 

Scientists first detected ripples in space known as gravitational waves from the merger of two black holes in September 2015. This discovery marked the culmination of a 100-year quest to prove one of Einstein’s predictions.

Two years after this watershed moment in physics came a second late-summer breakthrough in August 2017: the first detection of gravitational waves accompanied by electromagnetic waves from the merger of two neutron stars.

Gravitational waves are exciting to scientists because they provide a completely new view of the universe. Conventional astronomy relies on electromagnetic waves – like light – but gravitational waves are an independent messenger that can emanate from objects that don’t emit light. Gravitational wave detection has unlocked the universe’s dark side, giving scientists access to phenomena never observed before.

As a gravitational wave physicist with over 20 years of research experience in the LIGO Scientific Collaboration, I have seen firsthand how these discoveries have transformed scientists’ knowledge of the universe.

This summer, in 2025, scientists with the LIGO, Virgo and KAGRA collaboration also marked a new milestone. After a long hiatus to upgrade its equipment, this collaboration just released an updated list of gravitational wave discoveries. The discoveries on this list provide researchers with an unprecedented view of the universe featuring, among other things, the clearest gravitational wave detection yet.

A map of operational gravitational-wave observatories in the U.S., Europe and Japan, plus a planned one in India: the more operational observatories there are around the globe, the easier it is to pin down the locations and sources of gravitational waves coming from space. (Image: Caltech/MIT/LIGO Lab)

What are gravitational waves?

Albert Einstein first predicted the existence of gravitational waves in 1916. According to Einstein’s theory of gravity, known as general relativity, massive, dense celestial objects bend space and time.

When these massive objects, like black holes and neutron stars – the end product of a supernova – orbit around each other, they form a binary system. The motion from this system dynamically stretches and squeezes the space around these objects, sending gravitational waves across the universe. These waves ever so slightly change the distance between other objects in the universe as they pass.

Detecting gravitational waves requires measuring distances very carefully. The LIGO, Virgo and KAGRA collaboration operates four gravitational wave observatories: two LIGO observatories in the U.S., the Virgo observatory in Italy and the KAGRA observatory in Japan.

Each detector has L-shaped arms that stretch for miles: LIGO’s arms are 2.5 miles (4 kilometers) long, and Virgo’s and KAGRA’s are somewhat shorter. Each arm contains a cavity full of reflected laser light that precisely measures the distance between two mirrors.

As a gravitational wave passes, it changes the distance between the mirrors by about 10⁻¹⁸ meters, just 0.1% of the diameter of a proton. Astronomers can measure how the mirrors oscillate to track the orbit of black holes.
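As a rough back-of-the-envelope illustration (using LIGO’s roughly 4-kilometer arms), the quantity physicists actually quote is the dimensionless strain, the fractional change in arm length:

\[ h \approx \frac{\Delta L}{L} \approx \frac{10^{-18}\ \text{m}}{4{,}000\ \text{m}} \approx 2.5 \times 10^{-22}. \]

Measuring a signal that small is only possible because the laser light bounces back and forth in each arm many times and because seismic and thermal noise are suppressed to extraordinary levels.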

These tiny changes in distance encode a tremendous amount of information about their source. They can tell us the mass of each black hole or neutron star, their location and whether they are spinning on their own axes.

The LIGO detector in Hanford, Wash., an L-shaped facility with two long arms extending out from a central building, uses lasers to measure the minuscule stretching of space caused by a gravitational wave. (Image: LIGO Laboratory)

A neutron star-black hole merger

As mentioned previously, the LIGO, Virgo and KAGRA collaboration recently reported 128 new binary mergers from data taken between May 24, 2023, and Jan. 16, 2024 – which more than doubles the previous count.

Among these new discoveries is a neutron star–black hole merger. This merger consists of a relatively light black hole with mass between 2.5 and 4.5 times the mass of our Sun paired with a neutron star that is 1.4 times the mass of our Sun.

In this kind of system, scientists theorize that the black hole tears the neutron star apart before swallowing it, which releases electromagnetic waves. Sadly, the collaboration didn’t manage to detect any such electromagnetic waves for this particular system.

Detecting an electromagnetic counterpart to a black hole tearing apart a neutron star is among the holy grails of astronomy and astrophysics. These electromagnetic waves would provide the rich datasets required for understanding both the extreme conditions present in matter and extreme gravity. Scientists hope for better fortune the next time the detectors spot such a system.

A massive binary and clear gravitational waves

In July 2025, the LIGO, Virgo and KAGRA collaboration also announced they’d found the most massive binary black hole merger ever detected. The combined mass of this system is more than 200 times the mass of our Sun. And, one of the two black holes in this system likely has a mass that scientists previously assumed could not be produced from the collapse of a single star.

When two astrophysical objects – like black holes – merge, they send out gravitational waves.

The most recent discovery announced by the LIGO, Virgo and KAGRA collaboration, in September 2025, is the clearest gravitational wave observation to date. This event is a near clone of the first gravitational wave observation from 10 years ago, but because LIGO’s detectors have improved over the last decade, it stands out above the noise three times as much as the first discovery.

Because the observed gravitational wave signal is so clear, scientists could confirm that the final black hole that formed from the merger emitted gravitational waves exactly as it should according to general relativity.

They also showed that the surface area of the final black hole was greater than the surface areas of the initial black holes combined, which implies that the merger increased entropy, consistent with foundational work from Stephen Hawking and Jacob Bekenstein. Entropy measures how disordered a system is. According to thermodynamics, all physical interactions are expected to increase the disorder of the universe. This recent discovery showed that black holes obey laws of their own that closely parallel the laws of thermodynamics.
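For readers who want the formula behind that claim: Bekenstein and Hawking tie a black hole’s entropy S to the area A of its event horizon,

\[ S = \frac{k_B c^3}{4 G \hbar} A, \]

so Hawking’s area theorem, which says the total horizon area can never decrease \((A_{\text{final}} \ge A_1 + A_2)\), plays the role of the second law of thermodynamics for black holes. A signal this clear let the collaboration check that inequality directly.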

The beginning of a longer legacy

The LIGO, Virgo and KAGRA collaboration’s fourth observing run is ongoing and will last through November. My colleagues and I anticipate more than 100 additional discoveries within the coming year.

New observations starting in 2028 may bring the tally of binary mergers to as many as 1,000 by around 2030, if the collaboration keeps its funding.

Gravitational wave observation is still in its infancy. A proposed upgrade to LIGO called A# may increase the gravitational wave detection rate by another factor of 10. Proposed new observatories called Cosmic Explorer and the Einstein Telescope, which may be built in 10 to 20 years, would increase the rate of gravitational wave detection by a factor of 1,000 relative to the current rate by further reducing noise in the detectors.

About the Author:

Chad Hanna, Professor of Physics, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.