But in these advertising campaigns on TV, billboards on New York streets and on social media, the companies are signalling something larger.
Even Apple’s new series release, Pluribus, includes the phrase “Made by Humans” in the closing credits.
Other brands including H&M and Guess have faced a backlash for using AI brand ambassadors instead of humans.
These gestures suggest we have reached a cultural moment in the evolution of this technology, where people are unsure what creativity means now that machines can produce much of what we see and hear – and perhaps even what moves us.
This feels like efficiency – for executives
At a surface level, AI offers efficiencies such as faster production, cheaper visuals, instant personalisation, and automated decisions. Government and business have rushed toward it, drawn by promises of productivity and innovation. And there is no doubt that this promise is deeply seductive. Indeed, efficiency is what AI excels at.
In the context of marketing and advertising, this “promise”, at least at face value, seems to translate to smaller marketing budgets, better targeting, automated decisions (including by chatbots) and rapid deployment of ad campaigns.
For executives, this is exciting and feels like real progress, with cheaper, faster and more measurable brand campaigns.
But advertising has never really just been about efficiency. It has always relied on a degree of emotional truth and creative mystery. That psychological anchor – a belief that human intention sits behind what we are looking at – turns out to matter more than we like to admit.
Turns out, people care about authenticity
Indeed, people often value objects more when they believe those objects carry traces of a person’s intention or history. This holds even when the objects don’t differ in any material way from computer-generated equivalents.
To some degree, this signals consumers are sensitive to the presence of a human creator, because when visually compelling computer-generated images are labelled as machine-made, people tend to rate them less favourably.
Indeed, when the same paintings are randomly labelled as either “human created” or “AI created”, people consistently judge the works they believe to be “human created” as more beautiful, meaningful and profound.
However, there is an important caveat here. These studies rely on people being told who made the work. The effect is a result of attribution, not perception. And so this limitation points towards a deeper problem.
If evaluations change purely because people believe a work was machine made, the response is not about quality, it is about meaning. It reflects a belief that creativity is tied to intention, effort and expression. These are qualities an algorithm doesn’t possess, even when it creates something visually persuasive. In other words, the label carries emotional weight.
There are, of course, obvious examples of when AI goes comedically wrong. In early 2024, the Queensland Symphony Orchestra promoted its brand using a very strange AI-generated image most people instantly recognised as unnatural. Part of the backlash, along with the unsettling weirdness of the image, was the perception an arts organisation was betraying human creativity.
But as AI systems improve, people often struggle to distinguish synthetic from real. Indeed, AI generated faces are judged by many to be just as real, and sometimes more trustworthy, than actual photographs.
Research shows people overestimate their ability to detect deepfakes, and often mistake deepfake videos as authentic.
Although we can see emerging patterns here, the empirical research in this area is being outpaced by AI’s evolving capabilities. So we are often trying to understand psychological responses to a technology that has already evolved since the research took place.
As AI becomes more sophisticated, the boundary between human and machine-made creativity will become harder to perceive. Commerce may not be particularly troubled by this. If the output performs well, the question of origin becomes secondary.
Why we value creativity
But creative work has never been only about generating content. It is a way for people to express emotion, experience, memory, dissent and interpretation.
And perhaps this is why the rise of “Made by Humans” actually matters. Marketers are not simply selling provenance, they are responding to a deeper cultural anxiety about authorship in a moment when the boundaries of creativity are becoming harder to perceive.
Indeed, one could argue there is an ironic tension here. Marketing is one of the professions most exposed to being superseded by the same technology marketers are now trying to differentiate themselves from.
So whether these human-made claims are a commercial tactic or a sincere defence of creative intention, there is significantly more at stake than just another way to drive sales.
About the Author:
Paul Harrison, Director, Master of Business Administration Program (MBA); Co-Director, Better Consumption Lab, Deakin University
In late October 2025, as much as US$2 billion vanished from a digital marketplace. This wasn’t a hack or a bubble bursting. It happened because one company, Valve, changed the rules for its video game Counter-Strike 2, a popular first-person shooter with a global player base of nearly 30 million monthly users.
For years, its players have bought, sold and traded digital cosmetic items, known as “skins.” Some rare items, particularly knives and gloves, commanded high prices in real-world money – up to $1.5 million – leading some gamers to treat the market like an investment portfolio. As a result, many investment-style analytics websites charge monthly fees for financial insight, trends and transaction data from this digital marketplace.
In one fell swoop, Valve unilaterally changed the game. It expanded the “trade up contract,” allowing players to exchange – or “trade up” – a number of their common assets into knives or gloves.
By flipping this switch, Valve instantly upended digital scarcity. The market was flooded with new supply, and the value of existing high-end items collapsed. Prices plummeted, initially erasing half of the market’s total value, which had exceeded $6 billion before the crash. Although a partial recovery brought the net loss to roughly 25%, significant volatility continues, leaving investors unsure whether the bottom has truly fallen out.
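The crash figures above imply some simple arithmetic, sketched below with figures in US$ billions taken from the text. This is illustration only, not a precise accounting of the market’s value.

```python
peak_value = 6.0                          # market value before the crash (US$ billions)
after_crash = peak_value * (1 - 0.5)      # initial drop erased half the value
after_recovery = peak_value * (1 - 0.25)  # partial recovery left a ~25% net loss
net_loss = peak_value - after_recovery    # value still missing after the rebound
```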
Many of those who saw their digital fortunes evaporate immediately wondered whether there was anything they could do to get their money back. Speaking as a law professor and a gamer myself, the answer isn’t what they want to hear: no. In fact, the existing legal structure largely protects Valve’s ability to engage in this sort of digital market manipulation. Players and investors were simply out of luck.
The Counter-Strike 2 crash reveals a troubling reality that extends far beyond video games: Corporations have built exchange-scale investment markets governed primarily by private terms-of-service agreements, rather than the robust set of public regulations that oversee traditional financial and consumer markets. These digital economies occupy a legal blind spot, lacking the fundamental guardrails of property rights, meaningful consumer protection or even securities regulation.
Your digital ‘property’ isn’t really yours
If you spend real money on a digital item, it may feel like you should own it. Legally, you don’t.
The digital economy is built on a crucial distinction between ownership and licensing. When users sign up for Steam, Valve’s platform, they agree to the Steam subscriber agreement. Buried in that contract is a critical piece of legalese stating that all digital assets and services provided by Valve, including the Counter-Strike 2 skins, are merely “licensed, not sold.” The license granted to users “confers no title or ownership” at all. This isn’t meaningless corporate jargon; it’s a legal standard routinely affirmed by U.S. courts.
The legal implication is clear: Because players only license their skins, they have no property rights over them. When Valve changed the game’s mechanics in a way that collapsed the items’ market value, it didn’t steal, damage or destroy anyone’s “property.” In the eyes of the law, Valve simply altered the conditions of a license, something that its terms-of-service agreement allows it to do unilaterally, at any time, for any reason.
Consumer protection laws don’t apply
While the Counter-Strike 2 crash may seem like a violation of consumer rights, current laws are ill-equipped to handle this type of corporate behavior.
Lawmakers have begun addressing concerns about digital goods, primarily focusing on instances where purchased movies or games disappear entirely from user libraries. For example, California recently enacted AB 2426. This law requires transparency, prohibiting terms like “buy” or “purchase” unless the consumer confirms that they understand they will receive only a revocable license.
As commendable as this law is, it protects only against confusion and loss of access, not loss of market value when platforms rebalance virtual economies. Valve can comply with consumer transparency laws and still adjust the supply of digital items, rendering them valueless overnight. Ultimately, current consumer protection laws are designed to ensure users know what they are licensing. They do not, however, create ownership interests or protect the speculative value of those digital items.
Game items are treated like unregulated stocks
Perhaps the most significant legal vacuum is the absence of financial regulation. The Counter-Strike 2 economy, a multibillion-dollar ecosystem with dedicated investors and third-party cash markets, looks and behaves like a traditional financial market. Yet, it remains outside the purview of any financial regulator, such as the U.S. Securities and Exchange Commission.
Under U.S. law, the primary standard for determining whether an asset should be governed as a security is the Howey test. According to this Supreme Court precedent, an asset is a security if it meets four criteria. Securities involve an “investment of money” in a “common enterprise” with a reasonable expectation of “profits” derived from the “efforts of others.”
Counter-Strike 2 skins arguably meet all of these criteria. Participants invest real money in a common enterprise – Valve’s platform – with an expectation of profit. Crucially, that profit depends on the “efforts of others.” The SEC notes this prong is met when a promoter provides “essential managerial efforts” that affect the enterprise’s success. Valve controls the game’s development, manages the platform and – as the recent update proves – dictates item supply and scarcity.
If a publicly traded company unilaterally changed its rules in a way that predictably tanked the price of its own shares, regulators would immediately investigate for market manipulation. So how can Valve get away with this? Three things cut against the skins’ status as securities.
First is their “consumptive intent” – skins are primarily game cosmetics. Second, there’s no way to convert the skins into dollars within Valve’s own ecosystem; third-party markets allow users to cash out, but those markets operate outside Valve’s immediate control. And finally, the Howey test generally governs assets, such as stocks and bonds, that grant investors enforceable rights. Valve’s licensing scheme attempts to circumvent this by ensuring players hold nothing but a revocable license.
In my view, the $2 billion crash is a wake-up call. As digital economies grow in financial significance, society must decide: Will these markets continue to be governed solely by private corporate contracts? Or will they require integration into more robust legal frameworks, such as securities regulation, consumer protection and property law?
The world is losing vast swaths of forests to agriculture, logging, mining and fires every year — more than 20 million acres in 2024 alone, roughly the size of South Carolina.
Over the years, countries have committed to reverse that forest loss, and many organizations, governments, and Indigenous and local communities have worked hard to advance those goals. Many of their efforts have been at least partly successful.
For instance, Brazil credits stronger law enforcement and better monitoring at the state and national levels for helping reduce illegal land clearing and deforestation in the Amazon. The deforestation rate there fell by 31% from 2023 to 2024.
Funding from governments and the private sector is helping communities restore land that has already been cleared. Other programs protect forests through payments for ecosystem services, such as paying landowners to maintain existing forests and the benefits those forests provide. These programs provide money to a government, community or landowner based on verified results that the forest is being protected over time.
And yet, despite these and many other efforts, the world is falling short on its commitments to protect tropical forests. The planet lost 6.7 million hectares of tropical forest, nearly 26,000 square miles (67,000 square kilometers), in 2024 alone.
Law enforcement is not enough by itself. When enforcement is weakened, as happened in Brazil from 2019 to 2023, illegal land clearing and forest loss ramp back up. Programs that pay landowners to keep forests standing also have drawbacks. Research has shown they might only temporarily reduce deforestation if they don’t continue payments long term.
The problem is that deforestation is often driven by economic factors such as global demand for crops, cattle and minerals such as gold and copper. This demand provides significant incentives to farmers, companies and governments to continue clearing forests.
Simply put, the scale of the deforestation problem is massive, and new efforts are needed to truly reverse the economic drivers or causes of deforestation.
In order to increase the amount of funding to protect tropical forests, Brazil launched a global program on Nov. 6, 2025, ahead of the annual U.N. climate conference, called the Tropical Forest Forever Facility, or TFFF. It is an innovative approach that combines money from countries and private investors to compensate countries for preserving tropical forests.
As an environmental law scholar who works in climate policy development, including to protect tropical forests, I believe the design of the program has real promise. But I also see room to improve it by bringing in states and provinces to ensure money reaches programs closer to the ground that will pay off for the environment.
What makes the Tropical Forest Forever Facility different?
The Tropical Forest Forever Facility seeks to tackle the deforestation problem by focusing on the issue of scale – both geographic and economic.
First, it will measure results across entire countries rather than at the smaller landowner level. That can help reduce deforestation more broadly within countries and influence national policies that currently contribute to deforestation. It focuses on the amount of forest area protected rather than estimating the amount of carbon in the trees.
Second, it seeks to raise billions of dollars. This is important to counter the economic incentives for clearing forests for agriculture, livestock and timber.
The mechanics of raising these funds is intriguing – Brazil is seeking an initial $25 billion from national governments and foundations, and then another $100 billion from investors. These funds would be invested in securities – think the stock and bond markets – and returns on those investments, after a percentage is paid to investors, would be paid to countries that demonstrate successful forest protection.
These countries would be expected to invest their results-based payments in forest conservation initiatives, particularly in support of the communities doing the protection work on the ground. At least 20% of each payment must directly support local communities and Indigenous peoples, whose territories often have the lowest rates of deforestation thanks to their stewardship.
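The payout mechanics described above can be sketched in a few lines. The fund sizes come from the article; the annual return and the investor coupon are purely hypothetical placeholders, since the facility’s actual rates have not been set out here.

```python
# Back-of-envelope sketch of the TFFF payout mechanics.
sponsor_capital = 25e9    # sought from governments and foundations (article figure)
private_capital = 100e9   # sought from investors (article figure)
endowment = sponsor_capital + private_capital

gross_return = 0.07       # hypothetical annual return on the fund's securities
investor_coupon = 0.05    # hypothetical share of returns owed to investors

# What remains after paying investors funds the forest payments.
forest_payments = endowment * (gross_return - investor_coupon)
community_floor = 0.20 * forest_payments  # at least 20% to local and Indigenous communities
```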
Where different types of deforestation are most prominent. Shifting agriculture, shown in yellow, reflects land temporarily cleared for agriculture and later allowed to regrow. Project Drawdown, data from Curtis et al., 2018, CC BY-ND
Finally, the Tropical Forest Forever Facility recognizes that, like past efforts, it is not a silver bullet. It is being designed to complement other programs and policies, including carbon market approaches that raise money for forest protection by selling carbon credits to governments and companies that need to lower their emissions.
What has been the reaction so far?
The new forest investment fund is attracting interest because of its size, ambition and design.
Brazil and Indonesia were the first to contribute, committing $1 billion each. Norway added $3 billion on Nov. 7, and several other countries also committed to support it.
The Tropical Forest Forever Facility still has a long way to go toward its $125 billion goal, but it will likely draw additional commitments during the U.N. climate conference, COP30, being held Nov. 10-21, 2025, in Brazil. World leaders and negotiators are meeting in the Amazon for the first time.
How can the Tropical Forest Forever Facility be improved?
The Tropical Forest Forever Facility’s design has drawn some criticism, both for how the money is raised and for routing it through national governments. While the fund’s structure could attract more investors, if its investments don’t deliver strong returns in a given year, there may be nothing left to pay out – likely leaving a gap in expected payments for the programs and communities protecting forests.
Many existing international funding programs also provide money solely to national governments, as the Amazon Fund and the U.N.’s Global Environment Facility do. However, a lot of the actual work to reduce deforestation, from policy innovation to implementation and enforcement, takes place at the state and provincial levels.
One way to improve the Tropical Forest Forever Facility’s implementation would be to include state- and provincial-level governments in decisions about how payments will be used and ensure those funds make it to the people taking action in their territories.
A task force of state and provincial governments has developed a Blueprint for a New Forest Economy, which can help connect efforts such as the Tropical Forest Forever Facility to state- and community-level forest protection initiatives so funding reaches projects that can pay off for forest protection.
The Tropical Forest Forever Facility is an example of the type of innovative mechanism that could accelerate action globally. But to truly succeed, it will need to be coordinated with state and provincial governments, communities and others doing the work on the ground. The world’s forests – and people – depend on it.
The world’s most valuable publicly listed company, US microchip maker Nvidia, has reported a record $US57 billion in revenue for the third quarter of 2025, beating Wall Street estimates. The chipmaker expects revenue to rise again, to $US65 billion, in the final quarter of the year.
Just weeks ago, Nvidia became the first company valued at more than $US5 trillion – surpassing others in the “magnificent seven” tech companies: Alphabet (owner of Google), Amazon, Apple, Tesla, Meta (owner of Facebook, Instagram and Whatsapp) and Microsoft.
Nvidia stocks were up more than 5% to $US196 in after-hours trading immediately following the results.
Over the past week, news broke that tech billionaire Peter Thiel’s hedge fund had sold its entire stake in Nvidia in the third quarter of 2025 – more than half a million shares, worth around $US100 million.
But in that same quarter, an even more famous billionaire’s firm made a surprise bet on Alphabet, signalling confidence in Google’s ability to profit from the AI era.
Based in Omaha, Nebraska in the United States, Berkshire Hathaway is a global investing giant, led for decades by 95-year-old veteran Warren Buffett.
Berkshire Hathaway’s latest quarterly filing reveals the company accumulated a US$4.3 billion stake in Alphabet over the September quarter.
The size of the investment suggests a strategic decision – especially as the same filing showed Berkshire had significantly sold down its massive Apple position. (Apple remains Berkshire’s single largest stock holding, currently worth about US$64 billion.)
Buffett is about to step down as Berkshire’s chief executive. Analysts are speculating this investment may offer a pre-retirement clue about where durable profits in the digital economy could come from.
Buffett’s record of picking winners with ‘moats’
Buffett has picked many winners over the decades, from American Express to Coca Cola.
Yet he has long expressed scepticism toward technology businesses. He also has form in getting big tech bets wrong, most notably his underwhelming investment in IBM a decade ago.
But that framing misunderstands Buffett’s investment philosophy and the nature of Google’s business.
Buffett is not late to AI. He is doing what he’s always done: betting on a company he believes has an “economic moat” – a built-in advantage that keeps competitors out.
His firm’s latest move signals they see Google’s moat as widening in the generative-AI era.
Two alligators in Google’s moat
Google won the search engine wars of the late 1990s because it excelled in two key areas: reducing search cost and navigating the law.
Over the years, those advantages have acted like alligators in Google’s moat, keeping competitors at bay.
Google understood earlier and better than anyone that reducing search cost – the time and effort to find reliable information – was the internet’s core economic opportunity.
Google founders Sergey Brin and Larry Page in 2008, ten years after launching the company. Joi Ito/Wikimedia Commons, CC BY
Company founders Sergey Brin and Larry Page started with a revolutionary search algorithm. But the real innovation was the business model that followed: giving away search for free, then auctioning off highly targeted advertising beside the results.
Berkshire Hathaway likely sees Google’s track record in these areas as an advantage rivals cannot easily copy.
What if the AI bubble bursts?
Perhaps the genius of Berkshire’s investment is recognising that if the AI bubble bursts, it could bring down some of the “magnificent seven” tech leaders – but perhaps not its most durable members.
Consumer-facing giants like Google and Apple would probably weather an AI crash well. Google’s core advertising business sailed through the global financial crisis of 2008, the COVID crash, and the inflationary bear market of 2022.
By contrast, newer “megacaps” like Nvidia may struggle in a downturn.
Plenty could still go wrong
There’s no guarantee Google will be able to capitalise on the new economics of AI, especially with so many ongoing intellectual property and regulatory risks.
Google’s brand, like Buffett, could just get old. Younger people are using search engines less, with more using AI or social media to get their answers.
But with its rivers of online advertising gold, experience back to the dawn of the commercial internet, and capacity to use its platforms to nurture new habits among its vast user base, Alphabet is far from a bad bet.
Disclaimer: This article provides general information only and does not take into account your personal objectives, financial situation, or needs. It is not intended as financial advice. All investments carry risk.
The two most recent major bubble episodes were the dot-com bubble in the United States (1996-2000) and the housing bubbles that emerged around 2006 in different countries. Both ended in recession – the former relatively mild, and the latter catastrophically bad. Recent, dizzying increases in the stock prices of AI-related companies have now got many investors asking “are we witnessing another asset price bubble?”
It is important to put the current AI boom in context. The stock price of Nvidia – which manufactures many of the computer chips that power the AI industry – has multiplied by 13 since the start of 2023. Stocks in other AI-related companies like Microsoft and Google’s parent company Alphabet have multiplied by 2.1 and 3.2, respectively. In comparison, the S&P 500, which tracks the stocks of the most important US firms, has multiplied by just 1.8 in the same period.
It is important to emphasise that these AI-related companies are included in the S&P 500, making the difference with non-AI companies even larger. Accordingly, it seems that there is an AI-bubble – but it won’t necessarily end in a repeat of 2008.
How a bubble forms
The price of any stock can be broken down into two components: its fundamental value, and the inflated bubble value. If the stock’s price is above its fundamental value, there is a bubble in its price.
The fundamental value of an asset is the discounted sum of its expected future dividends. The key word here is “expected”. Given that no one, not even ChatGPT, can predict the future, the fundamental value depends on the subjective expectations of each investor. They might be optimistic or pessimistic; in time, some will be proven right, and others wrong.
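The valuation logic above – fundamental value as the discounted sum of expected future dividends – can be sketched in a few lines. This is a generic illustration of the formula, not any investor’s actual model; the dividend streams and discount rate below are hypothetical.

```python
def fundamental_value(expected_dividends, discount_rate):
    """Discounted sum of expected future dividends (one entry per year)."""
    return sum(d / (1 + discount_rate) ** t
               for t, d in enumerate(expected_dividends, start=1))

# Two investors value the same stock under different subjective expectations.
# Optimist: dividends start at $5 and grow 8% a year for 20 years.
optimist = fundamental_value([5 * 1.08 ** t for t in range(20)], 0.06)
# Pessimist: dividends stay flat at $5 for 20 years.
pessimist = fundamental_value([5] * 20, 0.06)
# Any market price above an investor's fundamental value looks,
# to that investor, like a bubble component.
```

The same market price can therefore imply a bubble to one investor and fair value to another, which is exactly why expectations matter.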
Optimistic investors expect that AI will change the world, and that the owners of this technology will make (almost) infinite profits. Not knowing which company will emerge victorious, they invest in all AI-related companies.
In contrast, pessimistic investors think that AI is just sophisticated software, as opposed to truly groundbreaking technology, and they will see bubbles everywhere.
A third possibility is the more sophisticated investor: someone who thinks – or knows – there is a bubble, but keeps investing in the hope of riding the wave and getting off before it is too late.
The last of these possibilities is reminiscent of the infamous quote from Citigroup CEO Chuck Prince before the 2008 housing bubble burst: “as long as the music is playing, you’ve got to get up and dance”.
As an economist, I can safely say it is impossible for all AI-related companies to end up dominating the market. This means, beyond a doubt, that the value of at least some AI-related stocks has a large bubble component.
A shortage of assets
Asset price bubbles can be the market’s natural response to a shortage of assets. In a moment when the demand for assets exceeds the supply (especially for safe assets like government bonds), there is room for other, newer assets to emerge.
This pattern explains the emergence of, for example, the 1990s dot-com bubble and the subsequent 2000s housing bubble. In that context, the growing role of China in financial markets increased the demand for assets in the West – the money first went to dot-com companies in the 1990s and, when that bubble burst, to fund housing via mortgage-backed securities.
In today’s context, a combination of factors has paved the way for the AI bubble: excitement around new technology, low interest rates (another sign of a shortage of assets) and huge amounts of cash flowing into large corporations.
The bubble bursts: good, bad and ugly scenarios
At the very least, part of the soaring value of AI-related stocks is a bubble – and a bubble cannot stay inflated forever. It has to either burst on its own or, ideally, be carefully deflated through targeted government or central bank measures. The current AI bubble could end in one of three scenarios: good, bad or ugly.
Good: boom not bubble
During the dot-com bubble, many bad firms received too much money – the classic example was Pets.com. But the bubble also provided financing to companies like Google, which (arguably) contributed to making the internet a productivity-enhancing technology.
Something similar may happen with AI: the current flurry of investment could, in the long run, create something good – technology that benefits humanity and eventually yields a return on investment. Without bubble levels of cash flow, much of it would never be funded.
In this optimistic scenario I am assuming that AI, even though it may displace some jobs in the short term (as most technology does), will turn out to be good for workers. I am also assuming, obviously, that it won’t lead to the extinction of humanity. For this to be the case, governments need to introduce proper, robust regulations. It is also worth emphasising that countries do not all need to invent new technologies themselves – they can adapt them and develop the applications that make them useful.
Bad: a gentle burst
All bubbles eventually burst. As things stand, we do not know when this will happen, nor the extent of the potential damage, but there will probably be a market correction when enough investors realise that multiple companies are overvalued. This decline in the stock market is bound to cause a recession.
Hopefully, it will be short-lived like the 2001 recession that followed the burst of the dot-com bubble. While no recession is painless, this one was relatively mild, and lasted less than one year in the US.
However, the burst of the AI bubble may be more painful because more households participate (either directly or indirectly via mutual funds) in the stock market than 20 years ago.
Even though the job of Central Banks is not to control asset prices, they may need to consider raising interest rates to deflate the bubble before it gets too large. The more sudden the crash, the deeper and costlier any ensuing recession will be.
Ugly: crash and burn
The burst of the AI bubble would be ugly if it turns out to share more features with the 2000s housing bubble than we currently imagine. On the positive side, AI stocks are not houses. This is good because when housing bubbles burst, the impacts on the economy are larger and longer-lasting than with other assets.
The housing bubble alone did not cause the 2008 financial crisis – its burst also brought down the global financial system. Another reason to be optimistic is that the role of commercial banks in AI finance is much smaller than their role in housing, where a vast amount of every bank’s money is perpetually tied up in mortgages.
However, one important caveat is that we do not know how the financial system will react if these huge AI companies default on their debt. Alarmingly, debt seems to be how they are currently financing new investments – a recent Bank of America analysis warned that large tech companies are relying heavily on borrowing to build new data centres, many of which are intended to meet demand that doesn’t actually exist yet.
You might not know it from the headlines, but there is some good news about the global fight against climate change.
A decade ago, the cheapest way to meet growing demand for electricity was to build more coal or natural gas power plants. Not anymore. Solar and wind power aren’t just better for the climate; they’re also less expensive today than fossil fuels at utility scale, and they’re less harmful to people’s health.
Yet renewable energy projects face headwinds, including in the world’s fast-growing developing countries. I study energy and climate solutions and their impact on society, and I see ways to overcome those challenges and expand renewable energy – but it will require international cooperation.
Falling clean energy prices
As their technologies have matured, solar power and wind power have become cheaper than coal and natural gas for utility-scale electricity generation in most areas, in large part because the fuel is free. The total global power generation from renewable sources saved US$467 billion in avoided fuel costs in 2024 alone.
As a result of falling prices, over 90% of all electricity-generating capacity added worldwide in 2024 came from clean energy sources, according to data from the International Renewable Energy Agency.
Burning coal, oil and natural gas releases tiny particles into the air along with toxic gases; these pollutants can make people sick. A recent study found air pollution from fossil fuels causes an estimated 5 million deaths worldwide a year, based on 2019 data.
Fossil fuels are also the leading sources of climate-warming greenhouse gases. When they’re burned to generate electricity or run factories, vehicles and appliances, they release carbon dioxide and other gases that accumulate in the atmosphere and trap heat near the Earth’s surface. That accumulation has been raising global temperatures and causing more heat stress, respiratory illnesses and the spread of disease.
Electrifying buildings, cars and appliances, and powering them with renewable energy, reduces these air pollutants while slowing climate change.
So what’s the problem?
In spite of the demonstrated economic and health benefits of transitioning to renewable energy, regulatory inertia, political gridlock and a lack of investment are holding back renewable energy deployment in much of the world.
In the United States, for example, the 2024 Energy Permitting Reform Act introduced by Sens. Joe Manchin, a Democrat from West Virginia, and John Barrasso, a Republican from Wyoming, to speed approvals failed to pass. Manchin called it "just another example of politics getting in the way of doing what's best for the country."
Developing countries, meanwhile, need to meet soaring energy demand. The International Energy Agency expects emerging economies to account for 85% of added electricity demand from 2025 through 2027. Yet renewable energy development lags in most of them. The main reason is the high cost of financing renewable energy construction.
Most of the cost of a renewable energy project is incurred up front in construction. Savings occur over its lifetime because it has no fuel costs. As a result, the levelized cost of energy (LCOE) for those projects varies depending on the cost of financing to build them. The chart illustrates the share of financing in each project's levelized cost of energy in 2024 versus the weighted average cost of capital (WACC), showing what happens when borrowing costs are higher, as they often are in developing countries. The yellow dots are solar projects; black and gray are offshore and onshore wind. Adapted from IRENA, 2025, CC BY
In many developing countries, wind and solar projects cost more to finance than coal or gas. Fossil projects have a longer history, and financial and policy mechanisms have been developed over decades to lower lender risk for those projects. These include government payment guarantees, stable fuel contracts and long-term revenue deals that help guarantee the lender will be repaid.
Both lenders and governments have less experience with renewable energy projects. As a result, these projects often come with weaker government guarantees. This raises the risk to lenders, so they charge higher interest rates, making renewable projects more expensive upfront, even if the projects have lower lifetime costs.
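The financing effect described above can be made concrete with a back-of-the-envelope levelized-cost calculation. The sketch below uses invented project numbers (a hypothetical 100 MW solar farm) rather than IRENA's actual methodology, but it shows why the same plant can be far more expensive to build in a market where lenders charge 12% than in one where they charge 4%:

```python
# Illustrative LCOE calculation showing how the weighted average cost of
# capital (WACC) drives the cost of a capital-intensive project.
# All project numbers below are invented for illustration only.

def lcoe(capex, annual_om, annual_mwh, wacc, years):
    """Levelized cost of energy ($/MWh) using a capital recovery factor."""
    crf = wacc * (1 + wacc) ** years / ((1 + wacc) ** years - 1)
    return (capex * crf + annual_om) / annual_mwh

# Hypothetical 100 MW solar farm: $80m upfront, $1m/yr O&M, ~220,000 MWh/yr.
cheap = lcoe(80e6, 1e6, 220_000, wacc=0.04, years=25)   # low-cost financing
costly = lcoe(80e6, 1e6, 220_000, wacc=0.12, years=25)  # high-cost financing

print(f"LCOE at 4% WACC:  ${cheap:.0f}/MWh")
print(f"LCOE at 12% WACC: ${costly:.0f}/MWh")
```

Because almost all of the cost is up-front capital, tripling the financing rate nearly doubles the levelized cost of the electricity, even though the hardware and the sunshine are identical.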
To lower borrowing costs, governments and international development banks can take steps to make renewable projects a safer bet for investors. For example, they can keep energy policies stable and use public funds or insurance to cover part of the lenders’ investment risk.
Without international cooperation to lower finance costs, developing economies could miss out on the renewable-energy revolution and lock in decades of growing greenhouse gas emissions from fossil fuels, making climate change worse.
Achieving this goal won’t be easy, but it is significantly less difficult now that renewable energy is more affordable over the long run than fossil fuels.
Switching the world’s power supply to renewable energy and electrifying buildings and local transportation would cut about half of today’s greenhouse-gas emissions. The other half comes from sectors where it is harder to cut emissions — steel, cement and chemical production, aviation and shipping, and agriculture and land use. Solutions are being developed but need time to mature. Good governance, political support and accessible finance will be critical for these sectors as well.
The transition to renewable energy offers big economic and health benefits alongside lower climate risks — if countries can overcome political obstacles at home and cooperate to expand financing for developing economies.
Most valuable company in the world with $4.5 trillion valuation
China exposure, data center and Q4 guidance in focus
Shares could move 6.2% ↑ or ↓ post earnings
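A figure like the "6.2% ↑ or ↓" above is typically derived from the options market: the price of an at-the-money straddle (a call plus a put at the current share price, expiring just after the event) divided by the share price. The option prices below are invented for illustration; the method, not the numbers, is the point:

```python
# Sketch of how an options-implied earnings move is commonly estimated:
# the cost of an at-the-money straddle expiring just after the event,
# divided by the share price. All prices here are hypothetical.

spot = 190.00      # hypothetical share price
atm_call = 6.10    # hypothetical call premium at the 190 strike
atm_put = 5.70     # hypothetical put premium at the same strike

implied_move = (atm_call + atm_put) / spot
print(f"Options-implied post-earnings move: ±{implied_move:.1%}")
```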
Some normality may return to markets after Trump signed a bill to end the longest government shutdown in US history.
However, storm clouds could be on the horizon due to the resumption of federal economic data.
Any nasty surprises could inject more volatility into a week already packed with key data and earnings from Nvidia – the world’s most valuable company.
Monday, 17th November
CAD: Canada CPI, housing starts
JPY: Japan industrial production, GDP
NZD: New Zealand food prices
NAS100: US Empire State manufacturing, Fed speeches
Tuesday, 18th November
AUD: RBA meeting minutes
US30: Home Depot earnings, Fed speeches
Wednesday, 19th November
EUR: Eurozone CPI
JP225: Japan machinery orders, trade
ZAR: South Africa CPI, retail sales
GBP: UK CPI
NAS100: FOMC minutes, Nvidia earnings
Thursday, 20th November
CN50: China loan prime rates
EU50: Eurozone consumer confidence
ZAR: South Africa rate decision
US30: Fed speeches, Walmart earnings
Friday, 21st November
CAD: Canada retail sales
EUR: Eurozone HCOB manufacturing PMI, ECB President Christine Lagarde speech
JPY: Japan CPI, S&P Global manufacturing PMI
GBP: UK S&P Global manufacturing PMI, retail sales
NAS100: US University of Michigan consumer sentiment, S&P Global manufacturing PMI, Fed speeches
FXTM's NAS100, which tracks the benchmark Nasdaq 100 index, is up almost 20% year-to-date.
But growing chatter around an AI bubble amid massive investments and circular business deals has sparked multiple selloffs.
Nvidia is slated to report quarterly results next Wednesday which could be a make-or-break moment for the AI rally.
Watch out for these 3 key factors:
1) Nvidia earnings
All eyes will be on Nvidia’s latest quarterly earnings after US markets close on Wednesday, November 19th.
With the company still at the heart of the AI hype, investors will be looking for another round of solid earnings to justify its nearly 120% rebound from 2025 lows. Any fresh updates on Blackwell deliveries, exposure to China and guidance for Q4 will be in sharp focus.
Essentially, the bar remains very high for Nvidia with very little room for disappointment.
The AI chip giant is expected to post earnings of $1.25 a share and a rise in quarterly revenue to $55 billion – marking a 57% increase from a year ago.
What does this mean for the NAS100?
Nvidia is the biggest constituent in the Nasdaq 100, accounting for roughly 14%.
Should Nvidia’s earnings satisfy investors’ lofty expectations and portray an encouraging business outlook, this could push the NAS100 higher.
If Nvidia’s earnings disappoint in the slightest, this could trigger a selloff in the NAS100.
2) Government reopening data dump
After a 43-day shutdown that began on October 1st, investors have been kept in the dark regarding the US economy.
It’s worth noting that the shutdown is expected to have cost the economy $15 billion a week with the Congressional Budget Office projecting it to lower real GDP growth in the current quarter by 1.5%.
Markets may be injected with heightened levels of volatility as the government resumes releasing economic figures as soon as next week.
Note: These reports include the September and October jobs reports, among others.
Should data flag weakness in the US economy, this could hit US equities before prices potentially rebound on Fed cut bets.
Stronger than expected data could boost US equities before they slip on cooling Fed cut expectations.
3) Technical forces
The NAS100 remains in a bullish channel on the daily charts but concerns around an AI bubble and easing Fed cut bets continue to fuel downside pressures.
Should the 50-day SMA prove reliable support, prices may rebound toward 25700 and 26300.
Weakness below 24700 may trigger a selloff toward the 100-day SMA around 24000.
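For readers unfamiliar with the indicator, a simple moving average (SMA) is just the mean of the last N daily closes. A minimal sketch using a synthetic price series (not real NAS100 data):

```python
# Minimal sketch of the simple moving averages (SMA) used as support
# levels in technical analysis, computed over a synthetic price series.

def sma(prices, window):
    """Simple moving average of the last `window` closes."""
    return sum(prices[-window:]) / window

# Synthetic daily closes for illustration (a real feed would supply these).
closes = [24000 + 15 * i for i in range(120)]  # 120 trading days

print(f"50-day SMA:  {sma(closes, 50):,.1f}")
print(f"100-day SMA: {sma(closes, 100):,.1f}")
```

In a steady uptrend like this synthetic series, the shorter 50-day average sits above the 100-day one, which is why analysts treat the 50-day line as the nearer support level.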
This is the second in a two-part series. Read part one here.
Globalisation has always had its critics – but until recently, they have come mainly from the left rather than the right.
In the wake of the second world war, as the world economy grew rapidly under US dominance, many on the left argued that the gains of globalisation were unequally distributed, increasing inequality in rich countries while forcing poorer countries to implement free-market policies such as opening up their financial markets, privatising their state industries and rejecting expansionary fiscal policies in favour of debt repayment – all of which mainly benefited US corporations and banks.
This was not a new concern. Back in 1841, German economist Friedrich List had argued that free trade was designed to keep Britain’s global dominance from being challenged, suggesting:
When anyone has obtained the summit of greatness, he kicks away the ladder by which he climbs up, in order to deprive others of the means of climbing up after him.
By the 1990s, critics of the US vision of a global world order such as the Nobel-winning economist Joseph Stiglitz argued that globalisation in its current form benefited the US at the expense of developing countries and workers – while author and activist Naomi Klein focused on the negative environmental and cultural consequences of the global expansion of multinational companies.
Mass left-led demonstrations broke out, disrupting global economic meetings including, most famously, the World Trade Organization (WTO) in 1999. During this “battle of Seattle”, violent exchanges between protesters and police prevented the launch of a new world trade round that had been backed by then US president, Bill Clinton. For a while, the mass mobilisation of a coalition of trade unionists, environmentalists and anti-capitalist protesters seemed set to challenge the path towards further globalisation – with anti-capitalism “Occupy” protests spreading around the world in the wake of the 2008 financial crash.
A documentary about the 1999 'battle of Seattle', directed by Jill Friedberg and Rick Rowley.
In the US, a further critique of globalisation centred on its domestic consequences for American workers – namely, job losses and lower pay – and led to calls for greater protectionism. Although initially led by trade unions and some Democratic politicians, this critique gradually gained purchase in radical right circles who opposed giving any role to international organisations like the WTO, on the grounds that they impinged on American sovereignty. According to this view, only by stopping foreign competition whose low wages undercut American workers could prosperity be restored. Immigration was another target.
Under Donald Trump’s second term as US president, these criticisms have been transformed into radical, deeply disruptive economic and social policies – with tariffs and protectionism at their heart. In so doing, Trump – despite all his grandstanding on the world stage – has confirmed what has long been clear to close observers of US politics and business: that the American century of global dominance, with the dollar as unrivalled no.1 currency, is drawing rapidly to a close.
Even before Trump first took office in 2017, the US had begun to withdraw from its leadership role in international economic institutions such as the WTO. Now, the strongest part of its economy, the hi-tech sector, is under intense pressure from China, whose economy is already bigger than the US’s by one key measure of GDP. Meanwhile, the majority of US citizens are facing stagnant incomes, higher prices and more insecure jobs.
In previous centuries, when first France and then Great Britain reached the end of their eras of world domination, these transitions had painful impacts beyond their borders. This time, with the global economy more closely integrated than ever before and no single dominant power waiting in the wings to take over, the impacts could be felt even more widely – with very damaging, if not catastrophic, results.
Why no one is ready to take the US’s place
When it comes to taking over from the US as the world’s leading hegemonic power, the only viable candidates with big enough economies are the European Union and China. But there are strong reasons to doubt that either could take on this role – notwithstanding the fact that in 2022, then US president Joe Biden’s National Security Strategy called China: “The only competitor with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military and technological power to do so.”
At times Biden’s successor, President Trump, has sounded almost jealous of the control China’s leaders exert over their national economy, and the fact they do not face elections and limits on their terms in office. But a one-party, authoritarian political system which lacks legal checks and balances is a key reason China will find it hard to gain the cultural and political dominance among democratic nations that is part of achieving world no.1 status – despite the influence it already wields in large parts of Asia and Africa.
China still faces big economic challenges too. While it is already the global leader in manufactured goods (rapidly moving into hi-tech products) and the world’s largest exporter, its economy is still very unbalanced – with a much smaller consumer sector, a weak property market, many inefficient state industries that are highly indebted, and a relatively small financial sector restricted by state ownership. Nor does China possess a global currency, despite its (limited) attempts to make the renminbi a truly international currency.
The Insights section is committed to high-quality longform journalism. Our editors work with academics from many different backgrounds who are tackling a wide range of societal and scientific challenges.
As I found on a reporting trip to Shanghai in 2007 to investigate the effects of globalisation, there are also enormous differences between China’s prosperous coastal megacities – whose main thoroughfares rival New York and Paris – and the relative poverty in the interior, especially in rural areas. But nearly two decades on from that visit, with the country’s growth rate slowing, many university-educated young people are also finding it hard to find well-paid jobs now.
Meanwhile Europe – the only other contender to take the US’s place as global no.1 – is deeply politically divided, with smaller, weaker economies to the east and south far more sceptical about the benefits of globalisation, and increasingly divided on issues such as migration and the Ukraine war. The challenges of achieving broad policy agreement among all member states, and the problem of who can speak for Europe, make it unlikely that the EU as currently constituted could initiate and enforce a new global world order on its own.
The EU’s financial system also lacks the heft of the US’s. Although it has a common currency (the euro) managed by the European Central Bank, its financial system is far more fragmented. Banks are regulated nationally, and each country issues its own government bonds (although a few eurobonds now exist). This makes it hard for the euro to replace the dollar as a store of value, and reduces the incentive for foreigners to hold euros as an alternative reserve currency.
Meanwhile, any future prospects of a renewal of US global leadership look similarly unpromising. Trump’s policy of cutting taxes while increasing the size of the US government debt – which now stands at US$38 trillion, or 120% of GDP – threatens both the stability of the world economy and the ability of the US to finance this mind-boggling deficit.
US national debt hits record high. Video: The Economic Times.
Tellingly, the Trump administration shows no interest in reviving, or even engaging with, many of the international financial institutions which America once dominated, and which helped shape the world economic order – as US trade representative Jamieson Greer expressed disdainfully in the New York Times recently:
Our current, nameless global order, which is dominated by the WTO and is notionally designed to pursue economic efficiency and regulate the trade policies of its 166 member countries, is untenable and unsustainable. The US has paid for this system with the loss of industrial jobs and economic security, and the biggest winner has been China.
While the US is not, so far, withdrawing from the IMF, the Trump administration has urged it to call out China for running such a large trade surplus, while abandoning its concern about climate change. Greer concluded that the US has “subordinated our country’s economic and national security imperatives to a lowest common denominator of global consensus”.
World without a global no.1
To understand the potential dangers ahead, we must go back more than a century to the last time there was no global hegemon. By the time the first world war officially ended with the signing of the Treaty of Versailles on June 28 1919, the international economic order had collapsed. Britain, world leader over the previous century, no longer possessed the economic, political or military clout to enforce its version of globalisation.
The UK government, burdened by the huge debts it had taken out to finance the war effort, was forced to make major cuts in public spending. In 1931, it faced a sterling crisis: the pound had to be devalued as the UK exited from the gold standard for good, despite having yielded to the demands of international bankers to cut payments to the unemployed. This was a final sign that Britain had lost its dominant place in the world economic order.
The 1930s were a time of deep political unease and unrest in Britain and many other countries. In 1936, unemployed workers from Jarrow, a town in north-east England with 70% unemployment after its shipyards closed, organised a non-political “hunger march” to London which became known as the Jarrow crusade. More than 200 men, dressed in their Sunday best, marched peacefully in step for over 200 miles, gaining great support along the way. Yet when they reached London, prime minister Stanley Baldwin ignored their petition – and the men were informed their dole money would be docked because they had been unavailable for work over the past fortnight.
Europe was also facing a severe economic crisis. After Germany’s government refused to pay the reparations agreed in the 1919 Versailles treaty, saying they would bankrupt its economy, the French army occupied the German industrial heartland of the Ruhr and German workers went on strike, supported by their government. The ensuing struggle fuelled hyperinflation in Germany. By November 1923, it took 200,000 million marks to buy a loaf of bread, and the savings and pensions of the German middle class were wiped out. That month, Adolf Hitler made his first attempt to seize power in the failed “Beer hall putsch” in Munich.
In contrast, across the Atlantic, the US was enjoying a period of postwar prosperity, with a booming stock market and explosive growth of new industries such as car manufacturing. But despite emerging as the world’s strongest economic power, having financed much of the Allied war effort, it was unwilling to grasp the reins of global economic leadership.
The Republican US Congress, having blocked President Woodrow Wilson's plan for a League of Nations, instead embraced isolationism and washed its hands of Europe's problems. The US refused to cancel or even reduce the war debts owed to it by the Allied nations, which eventually repudiated them. In retaliation, the US Congress banned all American banks from lending money to these so-called allies.
Then, in 1929, the affluent American “jazz age” came to an abrupt halt with a stock market crash that wiped off half its value. The country’s largest manufacturer, Ford, closed its doors for a year and laid off all its workers. With a quarter of the nation unemployed, long lines for soup kitchens were seen in every city, while those who had been evicted camped out wherever they could – including in New York’s Central Park, renamed “Hooverville” after the hapless US president of that time, Herbert Hoover.
In rural areas where the collapse in agricultural prices meant farmers could no longer make a living, armed farmers stopped food and milk trucks and destroyed their contents in a vain attempt to limit supply and raise prices. By March 1933, as President Franklin D. Roosevelt took office, the entire US banking system had ground to a standstill, with no one able to withdraw money from their bank account.
With its focus on this devastating Great Depression, the US refused to get involved in attempts at international economic cooperation. With no notice, Roosevelt withdrew from the 1933 London Conference which had been called to stabilise the world’s currencies – sending a message denouncing “the old fetishes of the so-called international bankers”.
With the US following the UK off the gold standard, the resulting currency wars exacerbated the crisis and further weakened European economies. As countries reverted to mercantilist policies of protectionism and trade wars, world trade shrank dramatically.
The situation became even worse in central Europe, where the collapse of the huge Credit-Anstalt bank in Austria in 1931 reverberated around the region. In Germany, as mass unemployment soared, centrist parties were squeezed and armed riots broke out between communist and fascist supporters. When the Nazis came to power, they introduced a policy of autarky, cutting economic ties with the west to build up their military machine.
The economic rivalries and antagonisms which weakened western economies paved the way for the rise of fascism in Germany. In some sense, Hitler – an admirer of the British empire – aspired to be the next hegemonic economic as well as military power, creating his own empire by conquering and ruthlessly exploiting the resources of the rest of Europe.
Troubled by rampant hyperinflation, Germans queue up with large bags to withdraw money from Berlin’s Reichsbank in 1923. Bundesarchiv/Wikimedia, CC BY-NC-SA
Nearly a century later, there are some disturbing parallels with that interwar period. Like America after the first world war, Trump insists that countries the US has supported militarily now owe it money for this protection. He wants to encourage currency wars by devaluing the dollar, and raise protectionist barriers to protect domestic industry. The 1920s was also a time when the US sharply limited immigration on eugenic grounds, only allowing it from northern European countries which (the eugenicists argued) would not “pollute the white race”.
Clearly, Trump does not view the lack of international cooperation that could amplify the damaging economic effects of a stock or bond market crash as a problem that should concern him. And in today’s unstable world, for all the US’s past failings as a global leader, that is a very worrying proposition.
How the US responded to the last financial crisis
Once again, the rules of the international order are breaking down. While it is possible that Trump’s approach will not be fully adopted by his successor in the White House, the direction of travel in the US will almost certainly remain sceptical about the benefits of globalisation, with limited support for any worldwide economic rules or initiatives.
We see similar scepticism about the benefits of globalisation emerging in other countries, amid the rise of rightwing populist parties in much of Europe and South America – many backed by Trump. Fuelling these parties’ support are growing concerns about income inequality, slow growth and immigration which are not being addressed by the current political system – and all of which would be exacerbated by the onset of a new global economic crisis.
With the global economy and financial system far bigger than ever before, a new crisis could be even more severe than the one that occurred in 2008, when the failure of the banking system left the world teetering on the brink of collapse.
The scale of this crisis was unprecedented, but key US and UK government officials moved boldly and swiftly. As a BBC reporter in Washington, I attended the House of Representatives’ Financial Services Committee hearing three days after Lehman Brothers went bankrupt, paralysing the global financial system, to find out the administration’s response. I remember the stunned look on the face of the committee’s chairman, Barney Frank, when he asked US Treasury secretary Hank Paulson and US Federal Reserve chairman Ben Bernanke how much money they might need to stabilise the situation:
“Let’s start with US$1 trillion,” Bernanke replied coolly. “But we have another US$2 trillion on our balance sheet if we need it.”
Documentary on the collapse of Lehman Brothers bank in September 2008.
Shortly afterwards, the US Congress approved a US$700 billion rescue package. While the global economy has still not fully recovered from this crisis, it could have been far worse – possibly as bad as the 1930s – without such intervention.
Around the world, governments ended up pledging US$11 trillion to guarantee the solvency of their banking systems, with the UK government putting up a sum equivalent to the country’s entire yearly GDP. But it was not just governments. At the G20 summit in London in April 2009, a new US$1.1 trillion fund was set up by the International Monetary Fund (IMF) to advance money to countries that were getting into financial difficulty.
The G20 also agreed to impose tougher regulatory standards for banks and other financial institutions that would apply globally, to replace the weak regulation of banks that had been one of the main causes of the crisis. As a reporter at this summit, I recall widespread excitement and optimism that the world was finally working together to tackle its global problems, with the host prime minister, Gordon Brown, briefly glowing in the limelight as organiser of that summit.
Behind the scenes, the US Federal Reserve had also been working to contain the crisis by quietly passing on to the world’s other leading central banks nearly US$600 billion in “currency swaps” to ensure they had the dollars they needed to bail out their own banking systems. The Bank of England secretly lent UK banks £100 billion to ensure they didn’t collapse, although two of the four major banks, Royal Bank of Scotland (now NatWest) and Lloyds, ultimately had to be nationalised (to different extents) to keep the financial system stable.
However, these rescue packages for banks, while much needed to stabilise the global economy, did not extend to many of the victims of the crash – such as the 12 million US households whose homes were now worth less than the mortgage they had taken out to pay for them, or the 40% of households who experienced financial distress during the 18 months after the crash. And the ramifications of the crisis were even greater for those living in developing countries.
A few months after the 2008 financial crisis began, I travelled to Zambia, an African country totally dependent on copper exports for its foreign exchange. I visited the Luanshya copper mine near Ndola in the country's copper belt. With demand for copper (used mainly in construction and car manufacturing) collapsing, all the copper mines had closed. Their workers, who had held some of the few well-paid jobs in Zambia, were forced to leave their comfortable company homes and, now without pay, move back in with relatives in Lusaka.
Zambia's government was forced to abandon its planned poverty reduction programme, which was to be funded by mining profits. The collapse in exports also damaged the Zambian currency, which dropped sharply. This hit the country's poorest people hard as it raised the price of food, most of which was imported.
The ripple effects of the 2008 global financial crisis soon hit Luanshya copper mine in Zambia. Nerin Engineering Co., CC BY-SA
I also visited a flower farm near Lusaka, where Dutch expats Angelique and Watze Elsinga had been growing roses for export for over a decade – employing more than 200 workers who were given housing and education. As the market for Valentine’s Day roses collapsed, their bankers, Barclays South Africa, suddenly ordered them to immediately repay all their loans, forcing them to sell their farm and dismiss their workers. Ultimately, it took a US$3.9 billion loan from the IMF and World Bank to stabilise Zambia’s economy.
Should another global financial crisis hit, it is hard to see the Trump administration (and others that follow) being as sympathetic to the plight of developing countries, or allowing the Federal Reserve to lend major sums to foreign central banks – unless it is a country politically aligned with Trump, such as Argentina. Least likely of all is the idea of Trump working with other countries to develop a global trillion-dollar rescue package to help save the world economy.
Rather, there is a real worry that reckless actions by the Trump administration – and weak global regulation of financial markets – could trigger the next global financial crisis.
What happens if the US bond market collapses?
Economic historians agree that financial crises are endemic in the history of global capitalism, and they have been increasing in frequency since the "hyper-globalisation" of the 1970s. From Latin America's debt crisis in the 1980s to the Asian currency crisis in the late 1990s and the US dotcom stock market collapse in the early 2000s, crises have regularly devastated economies and regions around the world.
Today, the greatest risk is the collapse of the US Treasury bond market, which underpins the global financial system and is involved in 70% of global financial transactions by banks and other financial institutions. Around the world, these institutions have long regarded the US bond market, worth over US$30 trillion, as a safe haven, because these "debt securities" are backed by the US central bank, the Federal Reserve.
Increasingly, the unregulated “shadow banking system” – a sector now larger than regulated global banks – is deeply involved in the bond market. Non-bank financial institutions such as private equity, hedge funds, venture capital and pension funds are largely unregulated and, unlike banks, are not required to hold reserves.
Bond market jitters are already unnerving global financial markets, which fear its unravelling could precipitate a banking crisis on the scale of 2008 – with highly leveraged transactions by these non-bank financial institutions leaving them exposed.
US bonds play a key role in maintaining the stability of the global economy. Video: Wall Street Journal.
Buyers of US bonds are also troubled by the Trump administration’s plan to raise the US deficit even higher to pay for tax cuts – with the national debt now forecast to rise to 134% of US GDP by 2035, up from 120% in 2025. Should this lead to a widespread refusal to buy more US bonds among jittery investors, their value would collapse and interest rates – both in the US and globally – would soar.
The governor of the Bank of England, Andrew Bailey, recently warned that the situation has “worrying echoes of the 2008 financial crisis”, while the head of the IMF, Kristalina Georgieva, said her worries about the collapse of private credit markets sometimes keep her awake at night.
A bad situation would grow even worse if problems in the bond market precipitate a sharp decline in the value of the dollar. The world’s “anchor currency” would no longer be seen as a safe store of value – leading to more withdrawals of funds from the US Treasury bond market, where many foreign governments hold their reserves.
A weaker dollar would also make imported goods more expensive for US consumers, while potentially boosting the country’s exports. This is precisely the course of action advocated by Stephen Miran, chair of the US president’s Council of Economic Advisers – whom Trump appears to want as the next head of the Federal Reserve.
One example of what could happen if bond markets become destabilised occurred when the shortest-lived prime minister in UK history, Liz Truss, announced huge unfunded tax cuts in her 2022 budget, causing the value of UK gilts (the equivalent of US Treasury bonds) to plummet as interest rates spiked. Within days, the Bank of England was forced to put up an emergency £60 billion rescue fund to avoid major UK pension funds collapsing.
In the case of a US bond market crash, however, there are growing fears that the US government would be unable – and unwilling – to step in to mitigate such damage.
A new era of financial chaos
Just as worrying would be a crash of the US stock market – which, by historic standards, is currently vastly overvalued.
Huge recent increases in the US stock market’s overall value have been driven almost entirely by the “magnificent seven” hi-tech companies, which alone make up a third of its total value. If their big bet on artificial intelligence is not as lucrative as they claim, or is overshadowed by the success of China’s AI systems, a sharp downturn, similar to the dotcom crash of 2000-02, could well occur.
Jamie Dimon, head of the US’s biggest bank JPMorgan Chase, has said he is “far more worried than other [experts]” about a serious market correction, which he warned could come in the next six months to two years.
Big tech executives have been overoptimistic before. Reporting from Silicon Valley in 2001 as the dotcom bubble was bursting, I was struck by the unshakeable belief of internet startup CEOs that their share prices could only go up.
Furthermore, their companies’ high stock valuations had allowed them to take over their competitors, thus limiting competition – just as companies such as Google and Meta (Facebook) have since used their highly valued shares to purchase key assets and potential rivals including YouTube, WhatsApp, Instagram and DeepMind. History suggests this is always bad for the economy in the long run.
With the business and financial worlds now ever more closely linked, not only has the frequency of financial crises increased over the last half-century – each crisis has also become more interconnected. The 2008 global financial crisis showed how dangerous this can be: a global banking crisis triggered stock market falls, collapses in the value of weak currencies, a debt crisis in developing countries – and ultimately, a global recession that took years to recover from.
The IMF’s latest financial stability report summarised the situation in worrying terms, highlighting “elevated” stability risks as a result of “stretched asset valuations, growing pressure in sovereign bond markets, and the increasing role of non-bank financial institutions. Despite its deep liquidity, the global foreign exchange market remains vulnerable to macrofinancial uncertainty.”
The IMF has warned about instability in the global financial system. Video: CGTN America.
I believe we may be entering a new era of sustained financial chaos during which the seeds sown by the death of globalisation – and Trump’s response to it – finally shatter the world economic and political order established after the second world war.
Trump’s high and erratically applied tariffs – aimed most strongly at China – have already made it difficult to reconfigure global supply chains. Even more worrying could be the struggle over the control of key strategic raw materials like the rare earth minerals needed for hi-tech industries, with China banning their export and the US threatening 100% tariffs in return (as well as hoping to take over Greenland, with its as-yet-untapped supply of some of these minerals).
This conflict over rare earths, vital for the computer chips needed for AI, could also threaten the market value of high-flying tech stocks such as Nvidia, the first company to exceed US$4 trillion in value.
The battle for control of critical raw materials could escalate. There is a danger that in some cases, trade wars might become real wars – just as they did in the former era of mercantilism. Many recent and current regional conflicts are rooted in economic struggles, from the first Gulf war, triggered by Iraq’s seizure of Kuwait’s oilfields, to the civil war in Sudan over control of the country’s goldmines.
The history of globalisation over the past four centuries suggests that the presence of a global superpower – for all its negative sides – has brought a degree of economic stability in an uncertain world.
In contrast, a key lesson of history is that a return to policies of mercantilism – with countries struggling to seize key natural resources for themselves and deny them to their rivals – is most likely a recipe for perpetual conflict. But this time around, in a world holding more than 10,000 nuclear weapons, miscalculations could be fatal if trust and certainty are undermined.
The challenges ahead are immense – and the weakness of international institutions, the limited vision of most governments and the alienation of many of their citizens do not inspire optimism.
This is the second in a two-part series. In case you missed it, read part one here.
This is the first in a two-part series. Read part two here.
For nearly four centuries, the world economy has been on a path of ever-greater integration that even two world wars could not totally derail. This long march of globalisation was powered by rapidly increasing levels of international trade and investment, coupled with vast movements of people across national borders and dramatic changes in transportation and communication technology.
According to economic historian J. Bradford DeLong, the value of the world economy (measured at fixed 1990 prices) rose from US$81.7 billion (£61.5 billion) in 1650, when this story begins, to US$70.3 trillion (£53 trillion) in 2020 – an 860-fold increase. The most intensive periods of growth corresponded to the two periods when global trade was rising fastest: first during the “long 19th century” between the end of the French revolution and start of the first world war, and then as trade liberalisation expanded after the second world war, from the 1950s up to the 2008 global financial crisis.
Now, however, this grand project is on the retreat. Globalisation is not dead yet, but it is dying.
Is this a cause for celebration, or concern? And will the picture change again when Donald Trump and his tariffs of mass disruption leave the White House? As a longtime BBC economics correspondent who was based in Washington during the global financial crisis, I believe there are sound historical reasons to worry about our deglobalised future – even once Trump has left the building.
Trump’s tariffs have amplified the world’s economic problems, but he is not the root cause of them. Indeed, his approach reflects a truth that has been emerging for many decades but which previous US administrations – and other governments around the world – have been reluctant to admit: namely, the decline of the US as the world’s no.1 economic power and engine of world growth.
In each era of globalisation since the mid-17th century, a single country has sought to be the clear world leader – shaping the rules of the global economy for all. In each case, this hegemonic power had the military, political and financial power to enforce these rules – and to convince other countries that there was no preferable path to wealth and power.
But now, as the US under Trump slips into isolationism, there is no other power ready to take its place and carry the torch for the foreseeable future. Many people’s pick, China, faces too many economic challenges, including its lack of a truly international currency. And as a one-party state, it does not possess the democratic mandate needed to gain acceptance as the world’s new dominant power.
While globalisation has always produced many losers as well as winners – from the slave trade of the 18th century to displaced factory workers in the American Midwest in the 20th century – history shows that a deglobalised world can be an even more dangerous and unstable place. The most recent example came during the interwar years, when the US refused to take up the mantle left by the decline of Britain as the 19th century’s hegemonic global power.
In the two decades from 1919, the world descended into economic and political chaos. Stock market crashes and global banking failures led to widespread unemployment and increasing political instability, creating the conditions for the rise of fascism. Global trade declined sharply as countries put up trade barriers and started self-defeating currency wars in the vain hope of boosting their exports. Instead, global growth ground to a halt.
A century on, our deglobalising world is vulnerable again. But to chart whether this means we are destined for a similarly chaotic and unstable future, we first need to explore the birth, growth and reasons behind the imminent demise of this extraordinary global project.
French model: mercantilism, money and war
By the mid-1600s, France had emerged as the strongest power in Europe – and it was the French who developed the first overarching theory of how the global economy could work in their favour. Nearly four centuries later, many aspects of “mercantilism” have been revived by Trump’s US playbook, which could be entitled How To Dominate the World Economy by Weakening Your Rivals.
France’s version of mercantilism was based on the idea that a country should put up trade barriers to limit how much other countries could sell to it, while boosting its own industries to ensure that more money (in the form of gold) came into the country than left it.
England and the Dutch Republic had already adopted some of these mercantilist policies, establishing colonies around the globe run by powerful monopolistic trading companies that aimed to challenge and weaken the Spanish empire, which had prospered on the gold and silver it seized in the Americas. In contrast to these “seaborne empires”, the much larger empires in the east such as China and India had the internal resources to generate their own revenue, meaning international trade – although widespread – was not critical to their prosperity.
But it was France which first systematically applied mercantilism across the whole of government policy – led by the powerful finance minister Jean-Baptiste Colbert (1661-1683), who had been granted unprecedented powers to strengthen the financial might of the French state by King Louis XIV. Colbert believed trade would boost the coffers of the state and strengthen France’s economy while weakening its rivals, stating:
It is simply, and solely, the absence or abundance of money within a state [which] makes the difference in its grandeur and power.
In Colbert’s view, trade was a zero-sum game. The more France could run a trade surplus with other countries, the more gold bullion it could accumulate for the government and the weaker its rivals would become if deprived of gold. Under Colbert, France pioneered protectionism, tripling its import tariffs to make foreign goods prohibitively expensive.
At the same time, he strengthened France’s domestic industries by providing subsidies and granting them monopolies. Colonies and government trading companies were established to ensure France could benefit from the highly lucrative trade in goods such as spices, sugar – and slaves.
Colbert oversaw the expansion of French industries into areas like lace and glass-making, importing skilled craftsmen from Italy and granting these new companies state monopolies. He invested heavily in infrastructure such as the Canal du Midi, and dramatically increased the size of France’s navy and merchant marine to challenge its British and Dutch rivals.
Global trade at this time was highly exploitative, involving the forced seizure of gold and other raw materials from newly discovered lands (as Spain had been doing with its conquests in the New World from the late 15th century). It also meant benefiting from the trade in humans, with huge profits as slaves were seized and sent to the Caribbean and other colonies to produce sugar and other crops.
In this era of mercantilism, trade wars often led to real wars, fought across the globe to control trade routes and seize colonies. Following Colbert’s reforms, France began a long struggle to challenge the overseas empires of its maritime rivals, while also engaging in wars of conquest in continental Europe.
France initially enjoyed success in the 17th century both on land and sea against the Dutch. But ultimately, its state-run French Indies company was no rival to the ruthless, commercially driven activities of the Dutch and British East India companies, which delivered enormous profits to their shareholders and revenues for their governments.
Indeed, the huge profits made by the Dutch from the Far Eastern spice trade explains why they had no hesitation in handing over their small North American colony of New Amsterdam, in return for expelling the British from a small toehold of one of their spice islands in what is now Indonesia. In 1664, that Dutch outpost was renamed New York.
After a century of conflict, Britain gradually gained ascendancy over France, conquering India and forcing its great rival to cede Canada in 1763 after the Seven Years war. France never succeeded in fully countering Britain’s naval strength. Resounding defeats by fleets led by Horatio Nelson in the early 19th century, coupled with Napoleon’s defeat at Waterloo by a coalition of European powers, marked the end of France’s time as Europe’s hegemonic power.
The battle of Trafalgar, off southwestern Spain in October 1805, was decisive in ending France’s era of dominance. Yale Center for British Art/Wikimedia
But while the French model of globalisation ultimately failed in its attempt to dominate the world economy, that has not prevented other countries – and now President Trump – from embracing its principles.
France found that tariffs alone could not sufficiently fund its wars nor boost its industries. Its broad version of mercantilism led to endless wars that spread around the globe, as countries retaliated both economically and militarily and tried to seize territories.
More than two centuries later, there is an uncomfortable parallel with what the results of Trump’s endless tariff wars might bring, both in terms of ongoing conflict and the organisation of rival trade blocs. It also shows that more protectionism, as proposed by Trump, will not be enough to revive the US’s domestic industries.
British model: free trade and empire
The ideology of free trade was first spelled out by British economists Adam Smith and David Ricardo, the founding fathers of classical economics. They argued trade was not a zero-sum game, as Colbert had suggested, but that all countries could mutually benefit from it. According to Smith’s classic text, The Wealth of Nations (1776):
If a foreign country can supply us with a commodity cheaper than we ourselves can make it, better buy it of them with some part of the produce of our own industry, employed in a way in which we have some advantage.
As the world’s first industrial nation, by the 1840s Britain had created an economic powerhouse based on the new technologies of steam power, the factory system, and railroads.
Smith and Ricardo argued against the creation of state monopolies to control trade, proposing minimal state intervention in industry. Ever since, Britain’s belief in the benefits of free trade has proved stronger and more long-lasting than any other major industrial power – more deeply embedded in both its politics and popular imagination.
This ironclad commitment was born out of a bitter political struggle in the 1840s between manufacturers and landowners over the protectionist Corn Laws. The landowners who had traditionally dominated British politics backed high tariffs, which benefited them but resulted in higher prices for staples like bread. The repeal of the Corn Laws in 1846 upended British politics, signalling a shift of power to the manufacturing classes – and ultimately to their working-class allies once they gained the right to vote.
An Anti-Corn Law League meeting held in London’s Exeter Hall in 1846. Wikimedia
In time, Britain’s advocacy of free trade unleashed the power of its manufacturing to dominate global markets. Free trade was framed as the way to raise living standards for the poor (the exact opposite of President Trump’s claim that it harms workers) and had strong working-class support. When the Conservatives floated the idea of abandoning free trade in the 1906 general election, they suffered a devastating defeat – the party’s worst until 2024.
As well as trade, a central element in Britain’s role as the new global hegemonic power was the rise of the City of London as the world’s leading financial centre. The key was Britain’s embrace of the gold standard which put its currency, the pound, at the heart of the new global economic order by linking its value to a fixed amount of gold, ensuring its value would not fluctuate. Thus the pound became the worldwide medium of exchange.
This encouraged the development of a strong banking sector, underpinned by the Bank of England as a credible and trustworthy “lender of last resort” in a financial crisis. The result was a huge boom in international investment, opening access to overseas markets for British companies and individual investors.
In the late 19th century, the City of London dominated global finance, investing in everything from Argentinian railways and Malaysian rubber plantations to South African gold mines. The gold standard became a talisman of Britain’s power to dominate the world economy.
The pillars of Britain’s global economic dominance were a highly efficient manufacturing sector, a commitment to free trade to ensure its industry had access to global markets, and a highly developed financial sector which invested capital around the world and reaped the benefits of global economic development. But Britain also did not hesitate to use force to open up foreign markets – for example, during the Opium Wars of the 1840s, when China was compelled to open its markets to the lucrative trade in opium from British-owned India.
By the end of the 19th century, the British empire incorporated one quarter of the world’s population, providing a source of cheap labour and secure raw materials as well as a large market for Britain’s manufactured goods. But that was still not enough for its avaricious leaders: Britain also made sure that local industries did not threaten its interests – by undermining the Indian textile industry, for example, and manipulating the Indian currency.
In reality, globalisation in this era was about domination of the world economy by a few rich European powers, meaning that much global economic development was curtailed to protect their interests. Under British rule between 1750 and 1900, India’s share of world industrial output declined from 25% to 2%.
But for those at the centre of Britain’s global formal and informal empire, such as the middle-class residents of London, this was a halcyon time – as economist John Maynard Keynes would later recall:
For middle and upper classes … life offered, at a low cost and with the least trouble, conveniences, comforts and amenities beyond the compass of the richest and most powerful monarchs of other ages. The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole Earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep.
US model: protectionism to neoliberalism
While Britain enjoyed its century of global dominance, the United States – from its foundation in 1776 – clung to protectionism for longer than any other major western economy.
The introduction of tariffs to protect and subsidise emerging US industries had first been articulated in 1791 by the fledgling nation’s first treasury secretary, Alexander Hamilton – Caribbean immigrant, founding father and future subject of a record-breaking musical. The Whig party under Henry Clay and its successor, the Republican Party, were both strong supporters of this policy for most of the 19th century. Even as US industry grew to overshadow all others, its government maintained some of the highest tariff barriers in the world.
Founding father Alexander Hamilton on the front of a US$10 note from 1934. Wikimedia
Tariff rates rose to 50% in the 1890s with the backing of future president William McKinley, both to help industrialists and pay for generous pensions for 2 million civil war veterans and their dependants – a key part of the Republican electorate. It is no accident that President Trump has festooned the White House with pictures of Hamilton, Clay and McKinley – all supporters of protectionism and high tariffs.
In part, the US’s enduring resistance to free trade was because it had access to an internal supply of seemingly limitless raw materials, while its rapidly growing population, driven by immigration, provided internal markets that fuelled its growth while keeping out foreign competition.
By the late 19th century, the US was the world’s biggest steel producer with the largest railroad system in the world and was moving rapidly to exploit the new technologies of the second industrial revolution – based on electricity, petrol engines and chemicals. Yet it was only after the second world war that the US assumed the role of global superpower – in part because it was the only country on either side of the war that had not suffered severe damage to its economy and infrastructure.
In the wake of global destruction in Europe and Asia, the US’s dominance was political, military and cultural, as well as financial – but the US vision of a globalised world had some important differences from its British predecessor.
The US took a much more universalist and rules-based approach, focusing on the creation of global organisations that would establish binding regulations – and open up global markets to unfettered American trade and investment. It also aimed to dominate the international economic order by replacing the pound sterling with the US dollar as the global medium of exchange.
Within a week of its entry into the second world war, plans were laid to establish US global financial hegemony. The US treasury secretary, Henry Morgenthau, began work on establishing an “inter-allied stabilisation fund” – a playbook for post-war monetary arrangements which would enshrine the US dollar at its heart.
This led to the creation of the International Monetary Fund (IMF) and World Bank at the Bretton Woods conference in New Hampshire in 1944 – institutions dominated by the US, which encouraged other countries to adopt the same economic model both in terms of free trade and free enterprise. Having suffered the devastating effects of the Great Depression and war, the Allied nations – who were simultaneously meeting to establish the United Nations in the hope of ensuring future world peace – welcomed the US’s commitment to shape a new, more stable economic order.
How the 1944 Bretton Woods deal ensured the US dollar would be the world’s dominant currency. Video: Bloomberg TV.
As the world’s biggest and strongest economy, there was (initially) little resistance to this US plan for a new international economic order in its own image. The motive was as much political as economic: the US wanted to provide economic benefits to ensure the loyalty of its key allies and counter the perceived threat of a communist takeover – in complete contrast to Trump’s mercantilist view today that all other countries are out to “rip off” the US, and that its own military might means it has no real need for allies.
After the war finally ended, the US dollar, now linked to gold at a fixed rate of $35 per ounce to guarantee its stability, assumed the role as the free world’s principal currency. It was both used for global trade transactions and held by foreign central banks as their currency reserves – giving the US economy an “exorbitant privilege”. The stable value of the dollar also made it easier for the US government to sell Treasury bonds to foreign investors, enabling it to more easily borrow money and run up trade deficits with other countries.
The conditions were set for an era of US political, financial and cultural dominance, which saw the rise of globally admired brands such as McDonald’s and Coca Cola, as well as a powerful US marketing arm in the form of Hollywood. Perhaps even more significantly, the relaxed, well-funded campuses of California would prove a perfect petri dish for the development of new computer technologies – backed initially by cold war military investment – which, decades later, would lead to the birth of the big-tech companies that dominate today’s digital landscape.
The US view of globalisation was broader and more interventionist than the British model of free trade and empire. Rather than having a formal empire, it wanted to open up access to the entire world economy, which would provide global markets for American products and services.
The US believed you needed global economic institutions to police these rules. But as in the British case, the benefits of globalisation were still unevenly shared. While countries that embraced export-led growth such as Japan, Korea and Germany prospered, other resource-rich but capital-poor countries such as Nigeria only fell further behind.
From dream to despair
Though the legend of the American dream grew and grew, by the 1970s the US economy was coming under increasing pressure – in particular from German and Japanese rivals, who by then had recovered from the war and modernised their industries.
Troubled by these perceived threats and a growing trade deficit, in 1971 President Richard Nixon stunned the world by announcing that the US was going off the gold standard – forcing other countries to bear the cost of adjustment for the US balance of payments crisis by making them revalue their currencies. This had a profound effect on the global financial system: within a decade, most major currencies had abandoned fixed exchange rates for a new system of floating rates, effectively ending the 1944 Bretton Woods settlement.
US president Richard Nixon announces the US is leaving the gold standard on August 15 1971.
The end of fixed exchange rates opened the door to the “financialisation” of the global economy, vastly expanding global investment and lending – much of it by US financial firms. This gave succour to the burgeoning neoliberal movement that sought to further rewrite the rules of the financial world order. In the 1980s and ’90s, these policy prescriptions became known as the Washington consensus: a set of rules – including opening markets to foreign investment, deregulation and privatisation – that was imposed on developing economies in crisis, in return for them receiving support from US-led organisations like the World Bank and IMF.
In the US, meanwhile, the increasing reliance on the finance and hi-tech sectors increased levels of inequality and fostered resentment in large parts of American society. Both Republicans and Democrats embraced this new world order, shaping US policy to favour their hi-tech and financial allies. Indeed, it was the Democrats who played a key role in deregulating the financial sector in the 1990s.
Meanwhile, the decline of US manufacturing industries accelerated, as did the gap between the incomes of those in the hinterland, where manufacturing was based, and residents of the large metropolitan cities.
By 2023, the lowest 50% of US citizens received just 13% of total personal income, while the top 10% received almost half (47%). The wealth gap was even greater, with the bottom 50% holding only 6% of total wealth, while a third (36%) was held by the top 1% alone. The real incomes of the bottom 50% have barely grown since 1980.
The bottom half of the US population was suffering from a surge in “deaths of despair” – a term coined by economist Anne Case and Nobel laureate Angus Deaton to describe high mortality rates from drug overdoses, suicide and alcohol-related disease among working-class Americans. Rising costs of housing, medical care and university education all contributed to widespread indebtedness and growing financial insecurity. By 2019, a study found that two-thirds of people who filed for bankruptcy cited medical issues as a key reason.
The decline in US manufacturing accelerated after China was admitted to the World Trade Organization in 2001, pushing America’s soaring trade and budget deficits even higher. Political and business elites hoped the move would open up the huge Chinese market to US goods and investment, but China’s rapid modernisation made its industry more competitive than its American rivals in many fields.
Ultimately, this era of intensive financialisation of the world economy created a series of regional and then global financial crises, damaging many Latin American and Asian economies. This culminated in the 2008 global financial crisis, precipitated by reckless lending by US financial institutions. The world economy took more than a decade to recover as countries wrestled with slower growth, lower productivity and less trade than before the crisis.
For those who chose to read it, the writing was on the wall for America’s era of global domination decades ago. But it would take Trump’s victory in the 2016 presidential election – a profound shock to many in the US “liberal establishment” – to make clear that the US was now on a very different course that would shake up the world.
Making a bad situation more dangerous
In my view, Trump is the first modern-day US president to fully understand the powerful alienation felt by many working-class American voters, who believed they were left out of the US’s immense post-war economic growth that so benefited the largely urban American middle classes. His strongest supporters have always been lower-middle-class voters from rural areas who are not college-educated.
Yet Trump’s key policies will ultimately do little for them. High tariffs to protect US jobs, expulsion of millions of illegal immigrants, dismantling protections for minorities by opposing DEI (diversity, equity and inclusion) programmes, and drastically cutting back the size of government will have increasingly negative economic consequences in the future, and are very unlikely to restore the US economy to its previous dominant position.
US president Donald Trump unveils his global tariff ‘hit list’ on April 3 2025. BBC News.
Long before he first became president, Trump hated the eye-watering US trade deficit (he’s a businessman, after all) – and believed that tariffs would be a key weapon for ensuring US economic dominance could be maintained. Another key part of his “America First” ideology was to repudiate the international agreements that were at the heart of the US’s postwar approach to globalisation.
In his first term, however, Trump (having not expected to win) was ill-prepared for power. But second time around, conservative thinktanks had spent years outlining detailed policies and identifying key personnel who could implement the radical U-turn in US economic policy.
Under Trump 2.0, we have seen a return to the mercantilist point of view reminiscent of France in the 17th and 18th centuries. His assertion that countries which ran a trade surplus with the US “were ripping us off” echoed the mercantilist belief that trade was a zero-sum game – rather than the 20th-century view, pioneered by the US, that globalisation brings benefits to all, no matter the precise balance of that trade.
Trump’s tax-and-tariff plans, which extend tax breaks for the very rich while squeezing the poor through benefit cuts and tariff-driven inflation, will increase inequality in the US.
At the same time, the passing of the One Big Beautiful Bill is predicted to add some US$3.5 trillion to US government debt – even after the Elon Musk-led “Department of Government Efficiency” cuts imposed on many Washington departments. This adds pressure to the key US Treasury bond market at the centre of the world financial system, and raises the cost of financing the huge US deficit while weakening its credit rating. Continuing these policies could threaten a default by the US, which would have devastating consequences for the entire global financial system.
For all the macho grandstanding from Trump and his supporters, his economic policies are a demonstration of American weakness, not strength. While I believe his highlighting of some of the ills of the US economy was overdue, the president is rapidly squandering the economic credibility and goodwill that the US built up in the postwar years, as well as its cultural and political hegemony. For people living in America and elsewhere, he is making a bad situation more dangerous – including for many of his most ardent supporters.
That said, even without Trump’s economic and societal disruptions, the end of the US era of hegemonic dominance would still have happened. Globalisation is not dead, but it is dying. The troubling question we all face now is what happens next.
This is the first of a two-part Insights long read on the rise and fall of globalisation. Read part two here: why the next global financial meltdown could be much worse with the US on the sidelines.
Every few years since 1953, the Chinese government has unveiled a new master strategy for its economy: the all-important five-year plan.
For the most part, these blueprints have been geared toward spurring growth and unity as the nation transformed from a rural, agrarian economy into an urbanized, developed powerhouse.
Now, facing slowing growth and intensifying competition with advanced economies, Beijing’s solution is more of the same. In pledging to deliver “high-quality development” through technological self-reliance, industrial modernization and expanded domestic demand, it is doubling down on the state-led model that has powered its rise in recent years. President Xi Jinping and others who ironed out the 2026-2030 plan are betting that innovation-driven industrial growth might secure China’s future, even as questions loom about underpowered consumer spending and mounting economic risks.
As an expert on China’s political economy, I view China’s new five-year plan as being as much about power as it is about economics. Indeed, it is primarily a blueprint for navigating a new era of competition. As such, it risks failing to address the widening gap between surging industrial capacity and tepid domestic demand.
High-tech dreams
At the heart of the new plan are recommendations that put advanced manufacturing and tech innovation front and center. In practice, this means upgrading old-line factories, automating and “greening” heavy industry and fostering “emerging and future industries” such as aerospace, renewable energy and quantum computing.
To insulate China from export controls put in place by other countries to slow China’s ascent, Beijing is doubling down on efforts to “indigenize” critical technologies by pumping money into domestic companies while reducing dependence on foreign suppliers.
This quest for self-reliance is not just about economics but explicitly tied to national security.
Under Xi, China has aggressively pursued what the Chinese Communist Party calls “military-civil fusion” – that is, the integration of civilian innovation with military needs.
The new five-year plan is poised to institutionalize this fusion as the primary mechanism for defense modernization, ensuring that any breakthroughs in civilian artificial intelligence or supercomputing automatically benefit the People’s Liberation Army.
Reshaping global trade
China’s state-led push in high-tech industries is already yielding dividends that the new five-year plan seeks to extend. In the past decade, China has surged to global leadership in green technologies such as solar panels, batteries and electric vehicles thanks to hefty government support. Now, Beijing intends to replicate that success in semiconductors, advanced machinery, biotechnology and quantum computing.
Such ambition, if realized, could reshape global supply chains and standards.
But it also raises the stakes in China’s economic rivalry with advanced economies. Chinese prowess in building entire supply chains has spurred the United States and Europe to talk of reindustrialization to avoid any overreliance on Beijing.
By pledging to build “a modern industrial system with advanced manufacturing as the backbone” and to accelerate “high-level scientific and technological self-reliance,” the new plan telegraphs that China will not back down from its bid for tech dominance.
An elusive rebalancing
What gets comparatively modest attention in the plan, however, is weak domestic demand.
Boosting consumer spending and livelihoods gets little more than lip service in the communiqué that followed the plenum at which the five-year plan was mapped out.
Chinese leaders did promise efforts to “vigorously boost consumption” and build a “strong domestic market,” alongside improvements to education, health care and social security. But these goals were listed only after the calls for industrial upgrading and tech self-sufficiency – suggesting old priorities still prevail.
And this will disappoint economists who have long urged Beijing to shift away from an export-led model and toward growth driven more by household consumption.
With local governments mired in debt and facing fiscal strain, there is skepticism that bold social spending or pro-consumption reforms will materialize anytime soon.
With Beijing reinforcing manufacturing even as domestic demand stays weak, the likelihood is extra output will be pushed abroad – especially when it comes to EVs, batteries and solar technologies – rather than be absorbed at home.
Beijing has traditionally portrayed its five-year plans as a boon not only for China but for the world. The official narrative, echoed by state media, emphasizes that a stable, growing China remains an “engine” of global growth and a “stabilizer” amid worldwide uncertainty.
Notably, the new plan calls for “high-level opening-up,” aligning with international trade rules, expanding free-trade zones and encouraging inbound investment – even as it pursues self-reliance.
Yet China’s drive to climb the technological ladder and support its industries will likely intensify competition in global markets – potentially at the expense of other countries’ manufacturers. In recent years, China’s exports have surged to record levels. This flood of cheap Chinese goods has squeezed manufacturers among trading partners from Mexico to Europe, which have begun contemplating protective measures. If Beijing now doubles down on subsidizing both cutting-edge and traditional industries, the result could be an even greater glut of Chinese products globally, exacerbating trade frictions.
In other words, the world may feel more of China’s industrial might but not enough of its buying power – a combination that could strain international economic relations.
A high-stakes bet on the future
With China’s 15th five-year plan, Xi Jinping is making a strategic bet on his long-term vision. There is no doubt that the plan is ambitious and comprehensive. And if successful, it could guide China to technological heights and bolster its claim to great-power status.
But the plan also reveals Beijing’s reluctance to depart from a formula that has yielded growth at the cost of imbalances that have hurt many households across the vast country.
Rather than fundamentally shift course, China is trying to have it all: pursuing self-reliance and global integration, professing openness while fortifying itself, and promising prosperity for the people while pouring resources into industry and defense.
But Chinese citizens, whose welfare is ostensibly the plan’s focus, will ultimately judge its success by whether their incomes rise and lives improve by 2030. And that bet faces long odds.