Archive for Programming

OpenAI has deleted the word ‘safely’ from its mission – and its new structure is a test for whether AI serves society or shareholders

By Alnoor Ebrahim, Tufts University 

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has largely gone unreported – outside highly specialized outlets.

And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

Tracing OpenAI’s origins

OpenAI, which also makes the Sora video artificial intelligence app, was founded as a nonprofit scientific research lab in 2015. Its original purpose was to benefit society by making its findings public and royalty-free rather than to make money.

To raise the money that developing its AI models would require, OpenAI, under the leadership of CEO Sam Altman, created a for-profit subsidiary in 2019. Microsoft initially invested US$1 billion in this venture; by 2024 that sum had topped $13 billion.

In exchange, Microsoft was promised a portion of future profits, capped at 100 times its initial investment. But the software giant didn’t get a seat on OpenAI’s nonprofit board – meaning it lacked the power to help steer the AI venture it was funding.

A subsequent round of funding in late 2024, which raised $6.6 billion from multiple investors, came with a catch: that the funding would become debt unless OpenAI converted to a more traditional for-profit business in which investors could own shares, without any caps on profits, and possibly occupy board seats.

Establishing a new structure

In October 2025, OpenAI reached an agreement with the attorneys general of California and Delaware to become a more traditional for-profit company.

Under the new arrangement, OpenAI was split into two entities: a nonprofit foundation and a for-profit business.

The restructured nonprofit, the OpenAI Foundation, owns about one-fourth of the stock in a new for-profit public benefit corporation, the OpenAI Group. Both are headquartered in California but incorporated in Delaware.

A public benefit corporation is a business that must consider interests beyond shareholders, such as those of society and the environment, and it must issue an annual benefit report to its shareholders and the public. However, it is up to the board to decide how to weigh those interests and what to report in terms of the benefits and harms caused by the company.

The new structure is described in a signed in October 2025 by OpenAI and the California attorney general, and endorsed by the Delaware attorney general.

Many business media outlets heralded the move, predicting that it would usher in more investment. Two months later, SoftBank, a Japanese conglomerate, finalized a $41 billion investment in OpenAI.

Changing its mission statement

Most charities must file forms annually with the Internal Revenue Service with details about their missions, activities and financial status to show that they qualify for tax-exempt status. Because the IRS makes the forms public, they have become a way for nonprofits to signal their missions to the world.

In its forms for 2022, , OpenAI said its mission was “to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return.”

This is the top of the front page of the 2023 990 form for OpenAI, with its mission stated at the bottom of the screenshot.
OpenAI’s mission statement as of 2023 included the word ‘safely.’
IRS via Candid

That mission statement has changed, as of – which the company filed with the IRS in late 2025. It became “to ensure that artificial general intelligence benefits all of humanity.”

This is the top of the front page of the 2024 990 form for OpenAI, with its mission stated at the bottom of the screenshot.
OpenAI’s mission statement as of 2024 no longer included the word ‘safely.’
IRS via Candid

OpenAI had dropped its commitment to safety from its mission statement – along with a commitment to being “unconstrained” by a need to make money for investors. According to Platformer, a tech media outlet, it has also disbanded its “mission alignment” team.

In my view, these changes explicitly signal that OpenAI is making its profits a higher priority than the safety of its products.

To be sure, OpenAI continues to mention safety when it discusses its mission. “We view this mission as the most important challenge of our time,” it states on its website. “It requires simultaneously advancing AI’s capability, safety, and positive impact in the world.”

Revising its legal governance structure

Nonprofit boards are responsible for key decisions and upholding their organization’s mission.

Unlike private companies, board members of tax-exempt charitable nonprofits cannot personally enrich themselves by taking a share of earnings. In cases where a nonprofit owns a for-profit business, as OpenAI did with its previous structure, investors can take a cut of profits – but they typically do not get a seat on the board or have an opportunity to elect board members, because that would be seen as a conflict of interest.

The OpenAI Foundation now has a 26% stake in OpenAI Group. In effect, that means that the nonprofit board has given up nearly three-quarters of its control over the company. Software giant Microsoft owns a slightly larger stake – 27% of OpenAI’s stock – due to its $13.8 billion investment in the AI company to date. OpenAI’s employees and its other investors own the rest of the shares.

Seeking more investment

The main goal of OpenAI’s restructuring, which it called a “recapitalization,” was to attract more private investment in the race for AI dominance.

It has already succeeded on that front.

As of early February 2026, the company was in talks with SoftBank for an additional $30 billion and stands to get up to a total of $60 billion from Amazon, Nvidia and Microsoft combined.

OpenAI is now valued at over $500 billion, up from $300 billion in March 2025. The new structure also paves the way for an eventual initial public offering, which, if it happens, would not only help the company raise more capital through stock markets but would also increase the pressure to make money for its shareholders.

OpenAI says the foundation’s endowment is worth about $130 billion.

Those numbers are only estimates because OpenAI is a privately held company without publicly traded shares. That means these figures are based on market value estimates rather than any objective evidence, such as market capitalization.

When he announced the new structure, California Attorney General Rob Bonta said, “We secured concessions that ensure charitable assets are used for their intended purpose.” He also predicted that “safety will be prioritized” and said the “top priority is, and always will be, protecting our kids.”

Steps that might help keep people safe

At the same time, several conditions in the OpenAI restructuring memo are designed to promote safety, including:

  1. A safety and security committee on the OpenAI Foundation board has the authority to that could potentially include the halting of a release of new OpenAI products based on assessments of their risks.
  2. The for-profit OpenAI Group has its own board, which must consider only OpenAI’s mission – rather than financial issues – regarding safety and security issues.
  3. The OpenAI Foundation’s nonprofit board gets to appoint all members of the OpenAI Group’s for-profit board.

But given that neither the mission of the foundation nor of the OpenAI group explicitly alludes to safety, it will be hard to hold their boards accountable for it.

Furthermore, since all but one board member currently serve on both boards, it is hard to see how they might oversee themselves. And doesn’t indicate whether he was aware of the removal of any reference to safety from the mission statement.

Identifying other paths OpenAI could have taken

There are alternative models that I believe would serve the public interest better than this one.

When Health Net, a California nonprofit health maintenance organization, converted to a for-profit insurance company in 1992, regulators required that 80% of its equity be transferred to another nonprofit health foundation. Unlike with OpenAI, the foundation had majority control after the transformation.

A coalition of California nonprofits has argued that the attorney general should require OpenAI to transfer all of its assets to an independent nonprofit.

Another example is The Philadelphia Inquirer. The Pennsylvania newspaper became a for-profit public benefit corporation in 2016. It belongs to the Lenfest Institute, a nonprofit.

This structure allows Philadelphia’s biggest newspaper to attract investment without compromising its purpose – journalism serving the needs of its local communities. It’s become a model for potentially transforming the local news industry.

At this point, I believe that the public bears the burden of two governance failures. One is that OpenAI’s board has apparently abandoned its mission of safety. And the other is that the attorneys general of California and Delaware have let that happen.The Conversation

About the Author:

Alnoor Ebrahim, Professor of International Business, The Fletcher School & Tisch College of Civic Life, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Probability underlies much of the modern world – an engineering professor explains how it actually works

By Zachary del Rosario, Olin College of Engineering 

Probability underpins AI, cryptography and statistics. However, as the philosopher Bertrand Russell said, “Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means.”

I teach statistics to engineers, so I know that while probability is important, it is counterintuitive.

Probability is a branch of mathematics that describes randomness. When scientists describe randomness, they’re describing chance events – like a coin flip – not strange occurrences, like a person dressed as a zebra. While scientists do not have a way to predict strange occurrences, probability does predict long-run behavior – that is, the trends that emerge from many repeated events.

Left: A person in a zebra costume. Right: A coin in mid-air after being flipped. A hand is visible with thumb extended upward.
We may say ‘random’ to describe strange occurrences (person dressed as zebra), but probability describes chance events (a coin flip).
Zebras in La Paz, Bolivia by EEJCC, Own Work CC A-SA 4.0; https://commons.wikimedia.org/wiki/File:Zebra_La_Paz.jpg _ , CC BY-SA

Modeling with probability

Since probability is about events, a scientist must choose which events to study. This choice defines the sample space. When flipping a coin, for example, you might define your event as the way it lands.

Coins almost always land on heads or tails. However, it’s possible – if very unlikely – for a coin to land on its side. So to create a sample space, you’d have two choices: heads and tails, or heads, tails and side. For now, ignore the side landings and use heads and tails as our sample space.

Next, you would assign probabilities to the events. Probability describes the rate of occurrence of an event and takes values between 0% and 100%. For example, a fair flip will tend to land 50% heads up and 50% tails up.

To assign probabilities, however, you need to think carefully about the scenario. What if the person flipping the coin is a cheater? There’s a sneaky technique to “wobble” the coin without flipping, controlling the outcome. Even if you can prevent cheating, real coin flips are slightly more probable to land on their starting face – so if you start the flip with the coin heads up, it’s very slightly more likely to land heads up.

In both the cheating and real flip cases, you need an appropriate sample space: starting face and other face. To have a fair flip in the real world, you’d need an additional step where you randomly – with equal probability – choose the starting face, then flip the coin.

Three bar graphs displaying probabilities for different outcomes. The 'Fair' Flip assigns equal probability (50%) to both heads and tails. The Real Flip assigns 51% to the Starting Face and 49% to the Other Face. The Cheater's Flip assigns 100% to the Starting Face.
The probabilities for different coin-flipping scenarios.
Zachary del Rosario, CC BY-SA

These assumptions add up quickly. To have a fair flip, you had to ignore side landings, assume no one is cheating, and assume the starting face is evenly random. Together, these assumptions constitute a model for the coin flip with random outcomes. Probability tells us about the long-run behavior of a random model. In the case of the coin model, probability describes how many coins land on heads out of many flips.

But instead of using a random model, why not just solve the coin toss using physics? Actually, scientists have done just that, and the physics shows that slight changes in the speed of the flip determine whether it comes up heads or tails. This sensitivity makes a coin flip unpredictable, so a random model is a good one.

Frequency vs. probability

Probability differs from frequency, which is the rate of events in a sequence. For example, if you flip a coin eight times and get two heads, that’s a frequency of 25%. Even if the probability of flipping a coin and seeing heads is 50% over the long run, each short sequence of flips will come out different. Four heads and four tails is the most probable outcome from eight flips, but other events can – and will – happen.

Frequency and probability are the same in one special setting: when the number of data points goes to infinity. In this sense, probability tells us about long-run behavior.

A bar chart of probabilities for all possible outcomes of eight 'fair' coin flips. Four heads has the highest probability (~27%), and the distribution is symmetric around four heads.
Probabilities for all possible outcomes of eight ‘fair’ coin flips.
Zachary del Rosario, CC BY-SA

Applications to AI, cryptography and statistics

Probability isn’t just useful for predicting coin flips. It underlies many modern technological systems.

For example, AI systems such as large language models, or LLMs, are based on next-word prediction. Essentially, they compute a probability for the words that follow your prompt. For example, with the prompt “New York” you might get “City” or “State” as the predicted next word, because in the training data those are the words that most frequently follow.

But since probability describes randomness, the outputs of a LLM are random. Just like a sequence of coin flips is not guaranteed to come out the same way every time, if you ask an LLM the same question again, you will tend to get a different response. Effectively, each next word is treated like a new coin flip.

Randomness is also key to cryptography: the science of securing information. Cryptographic communication uses a shared secret, such as a password, to secure information. However, surprising randomness isn’t good enough for security, which is why picking a surprising word is a bad choice of password. A shared secret is only secure if it’s hard to guess. Even if a word is surprising, real words are easier to guess than flipping a “coin” for each letter.

You can make a much stronger password by using probability to choose characters at random on your keyboard – or better yet, use a password manager.

Finally, randomness is key in statistics. Statisticians are responsible for designing and analyzing studies to make use of limited data. This practice is especially important when studying medical treatments, because every data point represents a person’s life.

The gold standard is a randomized controlled trial. Participants are assigned to receive the new treatment or the current standard of care based on a fair coin flip. It may seem strange to do this assignment randomly – using coin flips to make decisions about lives. However, the unpredictability serves an important role, as it ensures that nothing about the person affects their chance to get the treatment: not age, gender, race, income or any other factor. The unpredictability helps scientists ensure that only the treatment causes the observed result and not any other factor.

So what does probability mean? Like any kind of math, it’s only a model, meaning it can’t perfectly describe the world. In the examples discussed, probability is useful for describing long-term behaviors and using unpredictability to solve practical problems.The Conversation

About the Author:

Zachary del Rosario, Assistant Professor of Engineering, Olin College of Engineering

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI Agent Firm With Payments Technology Expands Into Texas, New Jersey

Source: Streetwise Reports (2/17/26)

The FUTR Corp. (FTRC:TSX; FTRCF:OTC) announces it has signed agreements with four new dealerships, expanding the reach of its FUTR Payments product into Houston, Texas, and further strengthening its presence in New Jersey. Find out why one analyst says the company is uniquely positioned to take advantage of this “pivotal moment” in the history of AI.

The FUTR Corp. (FTRC:TSX; FTRCF:OTC) announced that it has signed agreements with four new auto dealerships, expanding the reach of its FUTR Payments product into Houston, Texas, and further strengthening its presence in New Jersey, according to a February 17 release from the company.

FUTR is the creator of the FUTR Agent App, which allows users to store, manage, access, and monetize their personal information and make real-time payments.

AI agents were at the center of what “is shaping up to be one of the most audacious branding plays in the history of the internet,” a new US$70 million mega deal for the domain name AI.com announced during last weekend’s Super Bowl.

The signed agreements mark FUTR Payments’ first dealer relationship in Texas and are the initial results of the company’s newly deployed sales resources aimed at geographic expansion and reinforcing its presence in existing markets. With these agreements, FUTR Payments’ U.S. footprint now includes Texas, New York, New Jersey, Delaware, Florida, Iowa, and Connecticut.

These initial dealership partnerships are expected to serve as reference points as FUTR Payments continues to grow its presence in U.S. regional markets. The company said it plans to provide future updates as more dealerships, dealer groups, and regions join the platform.

“Expanding our footprint in New Jersey and entering the Texas market represent an important inflection point for FUTR Payments as we scale our platform across major U.S. markets,” said FUTR Payments Chief Business Officer Mindy Bruns. “These early dealership partnerships validate the demand we’re seeing for intelligent payment infrastructure that helps consumers build financial security while enabling dealers to engage customers in more durable, data-driven ways. We believe this expansion is an early indicator of the broader national opportunity ahead.”

Texas is one of the largest and most diverse automotive markets in the United States, with a significant concentration of independent and used-vehicle dealerships. Trailer Wheel & Frame Co., the company’s first Houston dealer, introduces a new asset category for FUTR Payments, expanding the applicability of its intelligent payment rails beyond traditional automotive inventory. The newly signed agreement with Speedway Motors LLC in Paterson, N.J., further expands the company’s presence in New Jersey by adding three more storefronts.

FUTR Payments is part of FUTR’s broader strategy, which combines intelligent payment infrastructure with consented consumer data and AI-enabled Agents.

Analyst: A ‘Pivotal Moment’ in AI Marketing

According to an updated research note by Research Capital Corp. Analyst Greg McLeish on February 11, “This year’s Super Bowl marked a pivotal moment in how artificial intelligence is being marketed to consumers.”

AI-related advertising has emerged as a key theme, illustrating how quickly AI has transitioned from an abstract concept to a mainstream consumer offering, Business Insider reported. The focus has shifted from chatbots to utility-driven AI systems that can perform tasks on behalf of users. In this context, Crypto.com’s launch of AI.com garnered significant attention and traffic, highlighting the growing interest in “AI agents that do things,” rather than systems that merely respond to prompts. Additionally, OpenClaw’s viral success in late January provided further validation, showing that autonomous, task-executing agents are gaining traction across both consumer platforms and developer communities. These developments indicate that AI agents are becoming mainstream as everyday utilities, rather than novelty tools.

“Crypto.com’s Super Bowl debut of AI.com reinforced that AI agents are moving decisively beyond experimental chat interfaces into mass-market, action-oriented tools,” McLeish wrote. “By committing roughly US$70 million for the AI.com domain and positioning its product as a ‘private AI agent,’ the company highlighted a shift toward autonomous digital assistants capable of managing schedules, automating workflows, and completing tasks on behalf of users. Post-game traffic reportedly overwhelmed early infrastructure, reinforcing strong initial engagement and signaling that consumer adoption is increasingly driven by execution rather than conversation.”

While recent agent launches validate the rising demand for autonomous AI, many early entrants lack the infrastructure needed for durable, real-world deployment, the analyst said.

“The FUTR Corp. is differentiated by anchoring its AI Agent in a SOC 2–compliant digital vault and embedding it directly into regulated financial workflows,” McLeish wrote. “FUTR’s platform combines compliance-grade data infrastructure, live banking and payment rails, and enterprise integrations that allow an agent not only to recommend actions, but to execute them securely across payments, credit, insurance, and home finance. Unlike consumer-facing agents that rely on generic cloud access, or open-source solutions that place security and operational burdens on the user, FUTR’s agent operates within institutional guardrails and is distributed through enterprise partnerships.”

McLeish continued, “In our view, this positions FUTR as an AI-native financial infrastructure layer, rather than a chatbot alternative, as agents move from novelty to necessity.”

Research Capital Corp. maintained its Speculative Buy rating on the stock with a CA$3 target price, a 991% return based on a sum-of-the-parts valuation taken at the time of writing.

“The result is a high-conviction opportunity at the intersection of consumer data, tokenized incentives, and privacy-first infrastructure,” McLeish said.

Most Expensive Domain Purchase in History

The US$70 million acquisition of AI.com by Crypto.com founder Kris Marszalek marks the most expensive domain purchase in history, paid entirely in cryptocurrency to an undisclosed seller, as reported by the Financial Times and covered by Connie Loizos for TechCrunch on February 8. This transaction sets a new standard in domain sales, surpassing previous record holders such as CarInsurance.com at US$49.7 million (2010), VacationRentals.com at US$35 million (2007), and Voice.com at US$30 million (2019).

In a letter to shareholders following the announcement, Alex McDougall, CEO of The FUTR Corp. (FTRC:TSX; FTRCF:OTC), stated that this acquisition “officially marks the beginning of functional AI Agents going mainstream.”

McDougall expressed the belief that this will become “the largest category the world has ever seen and as foundational as the advent of the internet.” He emphasized that FUTR is “ideally positioned to be at the front of the wave that is here.”

McDougall outlined the company’s progress: “We have set up the infrastructure in Q2, built the technology stack through Q3, signed the first wave of commercial partnerships through Q4, and now in Q1 it’s coming to market and the timing couldn’t be better” for FUTR’s agent. The company’s agent offers significant real-world utility, such as rewarding users for taking a picture of their property tax slip, knowing when those taxes are due, helping reserve cash flow in the budget to pay the tax, reminding users 15 days before the due date, comparing property taxes to other neighborhoods and home values, making the payment, and even reporting the payment to credit bureaus to maximize credit scores.

“That’s deep real-world utility,” McDougall said.

AI Agents Tailored to Your Data

According to FUTR, the AI agent within their app is not only easily accessible but also operates under your guidance, tailored specifically to your data. This AI is designed for individuals and works exclusively for you around the clock to accomplish your tasks. It integrates data from various sources and smoothly handles complex financial queries and services. “Chat GPT can find you information. It can order you food. It can do things in your browser,” McDougall told Streetwise Reports. But “FUTR can take your insurance policy, tell you where it’s good, where it’s bad, and find you a better one from our curated brand partners. If you put your mortgage into it, FUTR can read it, learn about it, tell you what clauses are suspect, find a better payment schedule for you, and then actually connect to payment rails and make those payments for you to take that intelligence and turn it into real action.”

For renters, FUTR could track when to renew your lease and report your rent payments to the credit bureau to help build your credit. “That’s really a key differentiator,” McDougall said. “There are a lot of AI agents that can tell you things. FUTR can go and do things for you.”

In a unique feature, FUTR tokens, created by the FUTR Foundation on the BASE Blockchain that powers the FUTR ecosystem, reward consumers and enterprises with tokens for sharing data, which they can use to purchase goods and services from FUTR brand partners. “Brands can purchase FUTR tokens or earn them from consumers and use those tokens to pay for leads from FUTR,” the company stated on its website.

According to the company, upcoming catalysts that could impact the stock price include the broad launch of the FUTR AI Agent App and FUTR Token sometime this quarter. The company also plans to introduce a FUTR Visa card.

The Catalyst: ‘Your Person For Everything’

According to the company, FUTR’s agent “can be your person for everything,” as McDougall explained. Unlike Chat GPT, which is designed for billions and processes data in a generalized way, the FUTR AI creates a personalized AI stack for each user, which the company refers to as “high fidelity AI.” A key advantage is that instead of your data being monetized without your knowledge, “every piece of data that goes into this agent and into this engine, you’re getting paid for it,” he said.

AI-powered shopping, with agents like FUTR’s acting on our behalf, signifies a major shift in the marketplace, according to a report by McKinsey & Co. This development points to a future where AI anticipates consumer needs, explores shopping options, negotiates deals, and completes transactions, all aligned with human intentions but operating independently through multistep processes enabled by reasoning models.

“This isn’t just an evolution of e-commerce,” the report stated. “It’s a rethinking of shopping itself in which the boundaries between platforms, services, and experiences give way to an integrated intent-driven flow, through highly personalized consumer journeys that deliver a fast, frictionless outcome.”

Streetwise Ownership Overview*

Insiders and Management: 23%
Share Structure as of 2/3/2026

By 2030, the U.S. B2C retail market alone could see up to US$1 trillion in orchestrated revenue from agentic commerce, with global estimates ranging from US$3 trillion to US$5 trillion, according to McKinsey research. This trend is expected to have an impact comparable to previous web and mobile-commerce revolutions, but it could progress even more rapidly since agents can navigate the same digital paths to purchase as humans, effectively “riding on the rails” established by these earlier transformations, researchers noted.

“This presents both benefits and risks for today’s commerce ecosystem,” McKinsey explained. “All kinds of businesses — brands, retailers, marketplaces, logistics and commerce services providers, and payments players — will need to adapt to the new paradigm and successfully navigate the challenges of trust, risk, and innovation.”

Ownership and Share Structure1

Approximately 23% of the company is owned by management and insiders. The remainder is held by retail investors.

Top shareholders include G. Scott Paterson with 8.38%, Melrose Ventures LLC with 2.08%, Michael Hillmer with 0.74%, Ashish Kapoor with 0.55%, and Jason G. Ewart with 0.52%.

The company’s market cap on February 12 was CA$35.1 million with 125.36 million shares outstanding. It trades within a 52-week range of CA$0.09 and CA$0.42.


Important Disclosures:

  1. The FUTR Corp. is a billboard sponsor of Streetwise Reports and pays SWR a monthly sponsorship fee between US$3,000 and US$6,000.
  2. As of the date of this article, officers, contractors, shareholders, and/or employees of Streetwise Reports LLC (including members of their household) own securities of The FUTR Corp.
  3. Steve Sobek wrote this article for Streetwise Reports LLC and provides services to Streetwise Reports as an employee.
  4. This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports’ terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.

For additional disclosures, please click here.

1. Ownership and Share Structure Information

The information listed above was updated on the date this article was published and was compiled from information from the company and various other data providers.

Data centers told to pitch in as storms and cold weather boost power demand

By Nikki Luke, University of Tennessee and Conor Harrison, University of South Carolina 

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow and freezing temperatures, it left more than a million people without power, mostly in the Southeast.

Scrambling to meet higher than average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT – the company that manages the Texas power grid – as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators.

The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their “industrial diesel generators” could “generate 35 gigawatts of power, or enough electricity to power many millions of homes.”

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through and recover from winter storms.

Data centers use enormous quantities of energy

Before Wright’s order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies.

This is a pressing question, because data centers’ power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM’s.

And data centers are expected to need only more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S. used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects a peak load growth of 32 gigawatts by 2030 – enough power to supply 30 million new homes, but nearly all going to new data centers. PJM’s job is to coordinate that energy – and figure out how much the public, or others, should pay to supply it.

The race to build new data centers and find the electricity to power them has sparked enormous public backlash about how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution.

Local ordinances, regulations created by state utility commissions and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time.

But since the 1970s, utilities have encouraged “demand response” programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation.

Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries and electric vehicles, these distributed energy resources can be dispatched as “virtual power plants.”

A different approach

The terms of data center agreements with local governments and utilities often aren’t available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use.

In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts and airline reservation systems.

Yet, data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide “data center demand response by targeting machine learning workloads,” shifting “non-urgent compute tasks” away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers’ power use off the grid during power shortages.

Flexibility for the future

One study has found that if data centers would commit to using power flexibly, an additional 100 gigawatts of capacity – the amount that would power around 70 million households – could be added to the grid without adding new generation and transmission.

In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their generation needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility as grid operators can tap into batteries, shift thermostats or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.

Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms.

Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn’t enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators are an emergency solution for large users such as data centers to reduce strain on the grid. Yet, this is not a long-term solution to winter storms. Instead, if data centers, utilities, regulators and grid operators are willing to also consider offsite distributed energy to meet electricity demand, then their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.The Conversation

About the Author:

Nikki Luke, Assistant Professor of Human Geography, University of Tennessee and Conor Harrison, Associate Professor of Economic Geography, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

By Domenico Vicinanza, Anglia Ruskin University 

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing.

These faster chips enable greater computing power by allowing devices to perform tasks more efficiently. As a result, we saw scientific simulations improving, weather forecasts becoming more accurate, graphics more realistic, and later, machine learning systems being developed and flourishing. It looked as if computing power itself obeyed a natural law.

This phenomenon became known as Moore’s Law, after the businessman and scientist Gordon Moore. Moore’s Law summarised the empirical observation that the number of transistors on a chip approximately doubled every couple of years. This also allows the size of devices to shrink, so it drives miniaturisation.

That sense of certainty and predictability has now gone, and not because innovation has stopped, but because the physical assumptions that once underpinned it no longer hold.

So what replaces the old model of automatic speed increases? The answer is not a single breakthrough, but several overlapping strategies.

One involves new materials and transistor designs. Engineers are refining how transistors are built to reduce wasted energy and unwanted electrical leakage. These changes deliver smaller, more incremental improvements than in the past, but they help keep power use under control.

Another approach is changing how chips are physically organised. Rather than placing all components on a single flat surface, modern chips increasingly stack parts on top of each other or arrange them more closely. This reduces the distance that data has to travel, saving both time and energy.

Perhaps the most important shift is specialisation. Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional processing units or CPUs handle control and decision-making. Graphics processors, are powerful processing units that were originally designed to handle the demands of graphics for computer games and other tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is.

Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity.

These are not general-purpose computers, and they are unlikely to replace conventional machines. Their potential lies in very specific areas, such as certain optimisation or simulation problems where classical computers can struggle to explore large numbers of possible solutions efficiently. In practice, these technologies are best understood as specialised co-processors, used selectively and in combination with traditional systems.

For most everyday computing tasks, improvements in conventional processors, memory systems and software design will continue to matter far more than these experimental approaches.

For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications, such as AI-powered tools, diagnostics, navigation, complex modelling, may see noticeable gains, while general-purpose performance increases more slowly.

New technologies

At the Supercomputing SC25 conference in St Louis, hybrid systems that mix CPUs (processors) and GPUs (graphics processing units) with emerging technologies such as quantum or photonic processors were increasingly presented and discussed as practical extensions of classical computing. For most everyday tasks, improvements in classical processors, memories and software will continue to deliver the biggest gains.

But there is growing interest in using quantum and photonic devices as co-
processors, not replacements. Their appeal lies in tackling specific classes of
problems, such as complex optimisation or routing tasks, where finding low-energy
or near-optimal solutions can be exponentially expensive for classical machines
alone.

In this supporting role, they offer a credible way to combine the reliability of
classical computing with new computational techniques that expand what these
systems can do.

Life after Moore’s Law is not a story of decline, but one that requires constant
transformation and evolution. Computing progress now depends on architectural
specialisation, careful energy management, and software that is deeply aware of
hardware constraints. The danger lies in confusing complexity with inevitability, or marketing narratives with solved problems.

The post-Moore era forces a more honest relationship with computation where performance is not anymore something we inherit automatically from smaller transistors, but it is something we must design, justify, and pay for, in energy, in complexity, and in trade-offs.The Conversation

About the Author:

Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI-induced cultural stagnation is no longer speculation − it’s already happening

By Ahmed Elgammal, Rutgers University 

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

A collage of AI-generated images that begins with a politician surrounded by policy papers and progresses to a room with fancy red curtains.
A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings.
Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. The convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without it, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.

The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.The Conversation

About the Author:

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Companies are already using agentic AI to make decisions, but governance is lagging behind

By Murugan Anandarajan, Drexel University 

Businesses are acting fast to adopt agentic AI – artificial intelligence systems that work without human guidance – but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it’s also a business opportunity.

I’m a professor of management information systems at Drexel University’s LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI & Business Analytics. We found that 41% of organizations are using agentic AI in their daily operations. These aren’t just pilot projects or one-off tests. They’re part of regular workflows.

At the same time, governance is lagging. Only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively.

In this context, governance is not about regulation or unnecessary rules. It means having policies and practices that let people clearly influence how autonomous systems work, including who is responsible for decisions, how behavior is checked, and when humans should get involved.

This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene.

For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave “as designed,” unexpected conditions can lead to undesirable outcomes.

This raises a big question: When something goes wrong with AI, who is responsible – and who can intervene?

Why governance matters

When AI systems act on their own, responsibility no longer lies where organizations expect it. Decisions still happen, but ownership is harder to trace. For instance, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often only find out when their card is declined.

So, what if your card is mistakenly declined by an AI system? In that situation, the problem isn’t with the technology itself – it’s working as it was designed – but with accountability. Research on human-AI governance shows that problems happen when organizations don’t clearly define how people and autonomous systems should work together. This lack of clarity makes it hard to know who is responsible and when they should step in.

Without governance designed for autonomy, small issues can quietly snowball. Oversight becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

When humans enter the loop too late

In many organizations, humans are technically “in the loop,” but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible – when a price looks wrong, a transaction is flagged or a customer complains. By that point, the system has already been decided, and human review becomes corrective rather than supervisory.

Late intervention can limit the fallout from individual decisions, but it rarely clarifies who is accountable. Outcomes may be corrected, yet responsibility remains unclear.

Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but timing. Without governance designed upfront, people act as a safety valve rather than as accountable decision-makers.

How governance determines who moves ahead

Agentic AI often brings fast, early results, especially when tasks are first automated. Our survey found that many companies see these early benefits. But as autonomous systems grow, organizations often add manual checks and approval steps to manage risk.

Over time, what was once simple slowly becomes more complicated. Decision-making slows down, work-arounds increase, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trust autonomous systems.

This slowdown doesn’t have to happen. Our survey shows a clear difference: Many organizations see early gains from autonomous AI, but those with stronger governance are much more likely to turn those gains into long-term results, such as greater efficiency and revenue growth. The key difference isn’t ambition or technical skills, but being prepared.

Good governance does not limit autonomy. It makes it workable by clarifying who owns decisions, how systems function is monitored, and when people should intervene. International guidance from the OECD – the Organization for Economic Cooperation and Development – emphasizes this point: Accountability and human oversight need to be designed into AI systems from the start, not added later.

Rather than slowing innovation, governance creates the confidence organizations need to extend autonomy instead of quietly pulling it back.

The next advantage is smarter governance

The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight and intervention from the start.

In the era of agentic AI, confidence will accrue to the organizations that govern best, not simply those that adopt first.The Conversation

About the Author:

Murugan Anandarajan, Professor of Decision Sciences and Management Information Systems, Drexel University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Despite its steep environmental costs, AI might also help save the planet

By Nir Kshetri, University of North Carolina – Greensboro 

The rapid growth of artificial intelligence has sharply increased electricity and water consumption, raising concerns about the technology’s environmental footprint and carbon emissions. But the story is more complicated than that.

I study emerging technologies and how their development and deployment influence economic, institutional and societal outcomes, including environmental sustainability. From my research, I see that even as AI uses a lot of energy, it can also make systems cleaner and smarter.

AI is already helping to save energy and water, cut emissions and make businesses more efficient in agriculture, data centers, the energy industry, building heating and cooling, and aviation.

Agriculture

Agriculture is responsible for nearly 70% of the world’s freshwater use, and competition for water is growing.

AI is helping farmers use water more efficiently. Argentinian climate tech startup Kilimo, for example, tackles water scarcity with an AI-powered irrigation platform. The software uses large amounts of data, machine learning, and weather and satellite measurements to determine when and how much to water which areas of fields, ensuring that only the plants that actually need water receive it.

Chile’s Ministry of Agriculture has found that in that country’s Biobío region, farms using Kilimo’s precision irrigation systems have reduced water use by up to 30% while avoiding overirrigation. Using less water also reduces the amount of energy needed to pump it from the ground and around a farm.

Kilimo is one example that shows how AI can create economic incentives for sustainability: The amount of water farmers save from precision irrigation is verified, and credits for those savings are sold to local companies that want to offset some of their water use. The farmers then earn a profit – often 20% to 40% above their initial investment.

Data centers

U.S. data centers consumed about 176 terawatt-hours of electricity in 2023, accounting for roughly 4.4% of total U.S. electricity use. This number increased to 183 TWh in 2024. This growing energy footprint has made improving data center efficiency a critical priority for the operators of the data centers themselves, as well as the companies that rely on them – including cloud providers, tech firms and large enterprises running AI workloads – both to reduce costs and meet sustainability and regulatory goals.

AI is helping data centers become more efficient. The number of global internet users grew from 1.9 billion in 2010 to 5.6 billion in 2025. Global internet traffic surged from 20.2 exabytes per month in 2010 to 521.9 exabytes per month in 2025 – a more than 25-fold increase.

Despite the surge in internet traffic and users, data center electricity consumption has grown more moderately, rising from 1% of global electricity use in 2010 to 2% in 2025. Much of this is thanks to efficiency gains, including those enabled by AI.

AI systems analyze operational data in data centers – including workloads, temperature, cooling efficiency and energy use – to spot energy-hungry tasks. It adjusts computing resources to match demand and optimizes cooling. This lets data centers run smoothly without wasting electricity.

At Microsoft, AI is improving energy efficiency by using predictive analytics to schedule computing tasks. This lets servers enter low-power modes during periods of low demand, saving electricity during slower times. Meta uses AI to control cooling and airflow in its data centers. The systems stay safe while using less energy than they might otherwise.

In Frankfurt, Germany, Equinix uses AI to manage cooling and adjust energy use at its data center based on real-time weather. This improved operational efficiency by 9%, The New York Times reported.

Energy and fuels

Energy companies are using AI to boost efficiency and cut emissions. They deploy drones with cameras to inspect pipelines. AI systems analyze the images to more quickly detect corrosion, cracks, dents and leaks, which allows problems to be addressed before they escalate, improving overall safety and reliability.

Shell has AI systems that monitor methane emissions from its facilities by analyzing methane concentrations and wind data, such as speed and direction. This helps the system track how methane disperses, enabling it to pinpoint emission sources and optimize energy use. By identifying the largest leaks quickly, the system allows targeted maintenance and operational adjustments to further reduce emissions. Using that technology, the company says it aims to nearly eliminate methane leaks by 2030.

AI could speed up innovation in clean energy by improving solar panels, batteries and carbon-capture systems. In the longer term, it could enable major breakthroughs, including advanced biofuels or even usable nuclear fusion, while helping track and manage carbon-absorbing resources such as forests, wetlands and carbon storage facilities.

Shell uses AI across its operations to cut emissions. Its process optimizer for liquefied natural gas analyzes sensor data to find more efficient equipment settings, boosting energy efficiency and reducing emissions.

Buildings and district heating

The energy needed to heat, cool and power buildings is responsible for roughly 28% of total global emissions. AI initiatives are starting to reduce building emissions through smart management and predictive optimization.

In downtown Copenhagen, for instance, the local utility company HOFOR deployed thousands of sensors tracking temperatures, humidity and building energy flows. The system uses information about each building to forecast heating needs 24 hours in advance and automatically adjust supply to match demand.

The Copenhagen system was first piloted in schools and multifamily housing, with support from the Nordic Smart City Network and climate-innovation grants. It has since expanded to dozens of sites. Results were clear: Across participating buildings, energy use fell 15% to 25%, peak heating demand dropped by up to 30%, and carbon dioxide emissions decreased by around 10,000 tonnes per year.

AI can also help households and offices save energy. Smart home systems optimize heating, cooling and appliance use. Researchers at the Lawrence Berkeley National Laboratory found that by adopting AI, medium-sized office buildings in the U.S. could reduce energy use by 21% and cut carbon dioxide emissions by 35%.

Aviation

About 2% of all human-caused carbon dioxide emissions in 2023 came from aviation, which emitted about 882 megatons of carbon dioxide.

Contrails, the thin ice clouds formed when aircraft exhaust freezes at cruising altitudes, contribute more than one-third of aviation’s overall warming effect by trapping heat in the atmosphere. AI can optimize flight routes and altitudes in real time to reduce contrail formation by avoiding areas where the air is more humid and therefore more likely to produce contrails.

Airlines have also used AI to improve fuel efficiency. In 2023, Alaska Airlines used 1.2 million gallons less fuel by using AI to analyze weather, wind, turbulence, airspace restrictions and traffic to recommend the most efficient routes, saving around 5% on fuel and emissions for longer flights.

In short, AI affects the environment in both positive and negative ways. Already, it has helped industries cut energy use, lower emissions and use water more efficiently. Expanding these solutions could drive a cleaner, more sustainable planet.The Conversation

About the Author:

Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Google’s proposed data center in orbit will face issues with space debris in an already crowded orbit

By Mojtaba Akhavan-Tafti, University of Michigan 

The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single, medium-sized data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.

Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers.

In space, sunshine – which solar panels can convert into electricity – is abundant and reliable. On Nov. 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space. So, instead of beaming power back to Earth, the constellation would beam data back to Earth.

For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.

As a technology entrepreneur, I applaud Google’s ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.

The mathematics of disaster

Space debris – the collection of defunct human-made objects in Earth’s orbit – is already affecting space agencies, companies and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from discontinued satellites.

Space debris travels at hypersonic speeds of approximately 17,500 miles per hour (28,000 km/h) in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.

Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX’s Starlink. The Starlink network has more than 7,500 satellites, which provide global high-speed internet.

The U.S. Space Force actively tracks over 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these telescopes to reliably identify and track.

In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the United States and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.

The orbital shell Google’s project targets – a Sun-synchronous orbit approximately 400 miles (650 kilometers) above Earth – is a prime location for uninterrupted solar energy. At this orbit, the spacecraft’s solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, Sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.

As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this theory, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.

Implications for Project Suncatcher

Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly with a radius of just one kilometer, each node spaced less than 200 meters apart. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 miles per hour – while separated by gaps about the distance you need to safely brake on the highway.

This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to “think” and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.

But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites – this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind.

To add to this complexity, streams of particles and magnetic fields from the Sun – known as space weather – can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.

When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but send it blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and randomly scatter millions of new pieces of debris into an orbit that is already a minefield.

The importance of active avoidance

To prevent crashes and cascades, satellite companies could adopt a leave no trace standard, which means designing satellites that do not fragment, release debris or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with “reflexes” that autonomously detect and dance through a debris field. Suncatcher’s current design doesn’t include these active avoidance capabilities.

In the first six months of 2025 alone, SpaceX’s Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Similarly, Suncatcher would likely encounter debris larger than a grain of sand every five seconds.

Today’s object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.

Equipping Suncatcher with active collision avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity. Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.

Detecting space debris in orbit can help prevent collisions.

Paying rent for the orbit

Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission’s completion. This typically involves a controlled de-orbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite’s altitude, until atmospheric drag takes over and the spacecraft burns up in the atmosphere.

However, the rule does not address the debris already in space, nor any future debris, from accidents or mishaps. To tackle these issues, some policymakers have proposed a use-tax for space debris removal.

A use-tax or orbital-use fee would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris removal missions, which capture and remove the most dangerous pieces of junk.

Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris removal programs can help keep low Earth orbit open for business.The Conversation

About the Author:

Mojtaba Akhavan-Tafti, Associate Research Scientist, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026

By Thomas Şerban von Davier, Carnegie Mellon University 

In artificial intelligence, 2025 marked a decisive shift. Systems once confined to research labs and prototypes began to appear as everyday tools. At the center of this transition was the rise of AI agents – AI systems that can use other software tools and act on their own.

While researchers have studied AI for more than 60 years, and the term “agent” has long been part of the field’s vocabulary, 2025 was the year the concept became concrete for developers and consumers alike.

AI agents moved from theory to infrastructure, reshaping how people interact with large language models, the systems that power chatbots like ChatGPT.

In 2025, the definition of AI agent shifted from the academic framing of systems that perceive, reason and act to AI company Anthropic’s description of large language models that are capable of using software tools and taking autonomous action. While large language models have long excelled at text-based responses, the recent change is their expanding capacity to act, using tools, calling APIs, coordinating with other systems and completing tasks independently.

This shift did not happen overnight. A key inflection point came in late 2024, when Anthropic released the Model Context Protocol. The protocol allowed developers to connect large language models to external tools in a standardized way, effectively giving models the ability to act beyond generating text. With that, the stage was set for 2025 to become the year of AI agents.

AI agents are a whole new ballgame compared with generative AI.

The milestones that defined 2025

The momentum accelerated quickly. In January, the release of Chinese model DeepSeek-R1 as an open-weight model disrupted assumptions about who could build high-performing large language models, briefly rattling markets and intensifying global competition. An open-weight model is an AI model whose training, reflected in values called weights, is publicly available. Throughout 2025, major U.S. labs such as OpenAI, Anthropic, Google and xAI released larger, high-performance models, while Chinese tech companies including Alibaba, Tencent, and DeepSeek expanded the open-model ecosystem to the point where the Chinese models have been downloaded more than American models.

Another turning point came in April, when Google introduced its Agent2Agent protocol. While Anthropic’s Model Context Protocol focused on how agents use tools, Agent2Agent addressed how agents communicate with each other. Crucially, the two protocols were designed to work together. Later in the year, both Anthropic and Google donated their protocols to the open-source software nonprofit Linux Foundation, cementing them as open standards rather than proprietary experiments.

These developments quickly found their way into consumer products. By mid-2025, “agentic browsers” began to appear. Tools such as Perplexity’s Comet, Browser Company’s Dia, OpenAI’s GPT Atlas, Copilot in Microsoft’s Edge, ASI X Inc.’s Fellou, MainFunc.ai’s Genspark, Opera’s Opera Neon and others reframed the browser as an active participant rather than a passive interface. For example, rather than helping you search for vacation details, it plays a part in booking the vacation.

At the same time, workflow builders like n8n and Google’s Antigravity lowered the technical barrier for creating custom agent systems beyond what has already happened with coding agents like Cursor and GitHub Copilot.

New power, new risks

As agents became more capable, their risks became harder to ignore. In November, Anthropic disclosed how its Claude Code agent had been misused to automate parts of a cyberattack. The incident illustrated a broader concern: By automating repetitive, technical work, AI agents can also lower the barrier for malicious activity.

This tension defined much of 2025. AI agents expanded what individuals and organizations could do, but they also amplified existing vulnerabilities. Systems that were once isolated text generators became interconnected, tool-using actors operating with little human oversight.

The business community is gearing up for multiagent systems.

What to watch for in 2026

Looking ahead, several open questions are likely to shape the next phase of AI agents.

One is benchmarks. Traditional benchmarks, which are like a structured exam with a series of questions and standardized scoring, work well for single models, but agents are composite systems made up of models, tools, memory and decision logic. Researchers increasingly want to evaluate not just outcomes, but processes. This would be like asking students to show their work, not just provide an answer.

Progress here will be critical for improving reliability and trust, and ensuring that an AI agent will perform the task at hand. One method is establishing clear definitions around AI agents and AI workflows. Organizations will need to map out exactly where AI will integrate into workflows or introduce new ones.

Another development to watch is governance. In late 2025, the Linux Foundation announced the creation of the Agentic AI Foundation, signaling an effort to establish shared standards and best practices. If successful, it could play a role like the World Wide Web Consortium in shaping an open, interoperable agent ecosystem.

There is also a growing debate over model size. While large, general-purpose models dominate headlines, smaller and more specialized models are often better suited to specific tasks. As agents become configurable consumer and business tools, whether through browsers or workflow management software, the power to choose the right model increasingly shifts to users rather than labs or corporations.

The challenges ahead

Despite the optimism, significant socio-technical challenges remain. Expanding data center infrastructure strains energy grids and affects local communities. In workplaces, agents raise concerns about automation, job displacement and surveillance.

From a security perspective, connecting models to tools and stacking agents together multiplies risks that are already unresolved in standalone large language models. Specifically, AI practitioners are addressing the dangers of indirect prompt injections, where prompts are hidden in open web spaces that are readable by AI agents and result in harmful or unintended actions.

Regulation is another unresolved issue. Compared with Europe and China, the United States has relatively limited oversight of algorithmic systems. As AI agents become embedded across digital life, questions about access, accountability and limits remain largely unanswered.

Meeting these challenges will require more than technical breakthroughs. It demands rigorous engineering practices, careful design and clear documentation of how systems work and fail. Only by treating AI agents as socio-technical systems rather than mere software components, I believe, can we build an AI ecosystem that is both innovative and safe.The Conversation

About the Author:

Thomas Şerban von Davier, Affiliated Faculty Member, Carnegie Mellon Institute for Strategy and Technology, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.