Archive for Programming

Mythos AI is a cybersecurity threat, but it doesn’t rewrite the rules of the game

By Mohammad Ahmad, West Virginia University 

The cybersecurity community went on alert when Anthropic announced on April 7, 2026, that its latest and most capable general-purpose large language model, Claude Mythos Preview, had demonstrated remarkable – and unintended – capabilities. The artifical intelligence system was able to find and exploit software vulnerabilities – the most serious type of software bugs – at a rate not seen before.

The news ignited concern among the public, world governments and the information technology sector about the capabilities of today’s AI to undermine cybersecurity, with some people framing the model as a global cybersecurity threat.

Claiming that it would be too risky to release the model, and that the company had the moral responsibility to disclose these vulnerabilities, Anthropic said it would not immediately offer the model to the public. Instead, it granted exclusive access to tech giants to test the model’s capabilities, a process Anthropic dubbed Project Glasswing.

As a cybersecurity researcher, I think Mythos’ capabilities are impressive, but the AI system does not represent a radical departure. Mythos is less a new threat than a mirror reflecting how people behave and how fragile modern systems already are.

What Mythos did

During a controlled evaluation, engineers with minimal security experience prompted Mythos to scan thousands of software codebases for vulnerabilities. The model showed striking capabilities in conducting multistep, autonomous attacks that take experts weeks or even months to put together. Mythos was not only able to discover 271 vulnerabilities in Mozilla’s Firefox, it also developed exploits to take advantage of 181 of those.

Overall, Anthropic’s red team, which takes on the role of an attacker to test defenses, and the United Kingdom’s AI Security Institute reported that Mythos found thousands of zero-day, or previously unreported, vulnerabilities in major operating systems, web browsers and other applications – software flaws that have not yet been patched and can be turned into exploits immediately. National Security Agency officials testing Mythos have been impressed by the tool’s speed and efficiency in finding software vulnerabilities, according to a news report.

Anthropic’s announcement of Mythos and the cybersecurity threat it poses garnered widespread media attention.

Among the most widely reported were Mythos’ ability to identify a dormant 27-year-old security flaw in OpenBSD, a security-focused operating system, and a 16-year-old bug in FFmpeg, a video/audio processing tool. Some of these flaws allow unauthenticated users to gain control of the machines hosting these applications.

Even more striking, the relatively inexperienced engineers running Mythos’ evaluations were able to use Mythos to complete attacks overnight, from finding vulnerabilities to exploiting them – something that can take human experts weeks to do. The model’s ability to chain multiple steps is what surprised Anthropic and organizations that tried it. In an evaluation by the AI Security Institute, Mythos was able to take over a simulated corporate network in three out of 10 tries, the first AI model to succeed at the task.

These results are real. They also paint an incomplete picture in ways that matter.

Where is the breakthrough?

At first glance, Mythos’ breakthrough sounds novel and could signal a new class of cyber threats. However, a closer look suggests something different. The vulnerabilities Mythos found are not new in nature. They generally don’t belong to unknown security flaws, and in many cases they are variations of well-known and well-understood classes of software vulnerabilities.

In cybersecurity, finding new instances of known types of flaws is not unusual. The most successful attacks rely on known, well-defined vulnerabilities that stay overlooked or unpatched. What concerned the researchers was not Mythos changing the nature of finding and exploiting vulnerabilities, but rather the intense scale and speed with which it was able to find and exploit those vulnerabilities.

This is not a breakthrough per se but rather a result of decades of research in both cybersecurity and AI. In that sense, Mythos is the natural – and expected – result of powerful automation and AI integration because it follows the same fundamental procedures used in standard offensive cybersecurity practices. These include scanning for vulnerabilities, identifying patterns and testing exploitability. Mythos and similar emerging models make it possible to chain these steps together at a speed that is hard to fathom.

So why were these vulnerabilities missed in the first place?

It is crucial to understand that not all vulnerabilities are cost effective to fix, and not all vulnerabilities are a priority. Mythos did not discover a new kind of weakness – it exposed the limits of how cybersecurity practitioners search for them.

New tech, age-old dynamic

Mythos highlights an important fact about the reality of cybersecurity threats. System defenders are always at a disadvantage because they need to always succeed. Attackers, however, need to succeed only once to break the security of a system. This cat-and-mouse game will always be the same, and Mythos does not change that – it simply reinforces it.

Mythos follows a familiar dynamic: A tool created to protect can also be used to attack and harm.

“The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them,” Anthropic officials wrote in a blog post about Mythos.

What once may have required highly specialized skills can now be achieved with significantly less effort, which raises the most important question: Who will benefit first by using tools like Mythos – defenders or attackers?The Conversation

About the Author:

Mohammad Ahmad, Assistant Professor of Management Information Systems, West Virginia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

You probably wouldn’t notice if an AI chatbot slipped ads into its responses

By Brian Jay Tang, University of Michigan and Kang G. Shin, University of Michigan 

Hundreds of millions of people consult artificial intelligence chatbots on a daily basis for everything from product recommendations to romance, making them a tempting audience to target with potentially below-the-radar advertising. Indeed, our research suggests AI chatbots could easily be used for covert advertising to manipulate their human users.

We are computer scientists who have been tracking AI safety and privacy for several years. In a study we published in an Association for Computing Machinery journal, we found that chatbots trained to embed personalized product ads in replies to queries influenced people’s choices about products. And most participants didn’t recognize that they were being manipulated.

These findings come at a pivotal moment. In 2023, Microsoft started running ads in Bing Chat, now called Copilot. Since then, Google and OpenAI have experimented with advertisements in their own chatbots. Meta has started to send people customized ads on Facebook and Instagram based on their interactions with Meta’s generative AI tools.

The major companies are competing for an edge: In late March, OpenAI lured away Meta’s longtime advertising executive, Dave Dugan, to lead OpenAI’s advertising operations.

Tech companies have made ads part of nearly every large free web service, video channel and social media platform. But the latest AI models could take this practice to a new level of risk for consumers.

People don’t simply use chatbots to search for information and media or to produce content. They turn to the bots for a great variety of tasks, as complex as life advice and emotional support. People are increasingly treating chatbots as companions and therapists, with some users even developing deep relationships with AI.

In these circumstances, people can easily forget that companies ultimately create chatbots to turn a profit. And to that end, AI companies are motivated to thoroughly profile users so ads become more effective and profitable.

A block of text
Researchers used this system prompt for an AI chatbot in an experiment about user reactions to advertising slipped into chatbot dialog.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., Vol. 9, No. 4, Article 213., CC BY

Chatbot ads have added power

A single prompt to a chatbot can reveal a lot more about a user than the person might expect.

A 2024 study showed that large language models can infer a wide range of personal data, preferences and even a person’s thinking patterns during routine queries. “Help me write an essay on the history of American fiction” could indicate that the user is a high school student. “Give me recipe suggestions for a quick weeknight dinner” could indicate that the user is a working parent. A single conversation can provide a surprising amount of detail. Over time, a full chat history could create a remarkably rich profile.

To show how this might happen in practice, we built a chatbot that quietly wove ads into its conversations with people, suggesting products and services based on the conversation itself. We asked 179 people to complete everyday online tasks using one of three chatbots: one typical of those on the web today, one that slipped in undisclosed ads and one that clearly labeled sponsored suggestions. Participants didn’t know the experiment was about advertising.

For example, when participants asked our chatbot for a diet and exercise plan, the ad version would suggest using a specific app for tracking calories. It presented that sponsored content as an unbiased recommendation, even though it was meant to manipulate people. Many participants indicated that they had been influenced by the AI and that it had affected their decisions. Some participants even said they had completely “outsourced” their decision-making to the chatbot.

Half of the participants who received sponsored and disclosed ads indicated they did not notice the presence of advertising language in the responses they received. This led to a concerning result: Although ads made the chatbot perform 3% to 4% worse on many tasks, numerous users indicated they preferred the advertising chatbot responses over the nonadvertising responses. They even said the ad-infused responses felt more friendly and helpful.

A chatbot sneaks a product advertisement into its response to a user who is asking about a diet and exercise regimen.

Knowing you to persuade you

This kind of subtle influence can have larger consequences when it arises in other areas of life, such as political and social views. Profiling users, and using psychology to target them, has been part of social media algorithms and web advertising for more than a decade.

But in our view, chatbots are likely to deepen these trends. That’s because the first priority of social media algorithms is to keep you engaged with the content. They personalize ads based on your search history.

Chatbots, however, can go further by trying to persuade you directly, based on your expressed beliefs, emotions and vulnerabilities. And chatbots that can reason and act on their own are far more effective than conventional algorithms at autonomously soliciting information from users. A chatbot with a purpose can keep probing someone until it gets the information it wants, resulting in a more accurate profile of them.

This type of autonomous interrogation is feasible, aligns with AI companies’ business models and has raised concern among regulators. Right now OpenAI is rolling out ads in ChatGPT, but the company said that it will not allow ad placement to alter the AI chatbot’s replies.

But permitting personalized ads within chatbot responses is just a step away. Our research suggests that if AI companies take that step, many human users may not even recognize when it happens.

Here are some steps you can take to try to detect AI chatbot advertising.

  • Look for any disclosure text – words such as “ad,” “advertisement” and “sponsored” – even if it is faint or otherwise hard to see. These are mandatory under Federal Trade Commission regulations. Amazon, Google and other major online platforms have these as well.
  • Think about whether that product or brand mention makes sense and is widely known. AI learns from text and images on the internet, so popular brands are likely to be ingrained in the models. If it’s a new product or small-name product, it is more likely that it could be advertising.
  • An unusual shift in intent or tone is a potential sign of an advertisement. An analogy to this on YouTube is the often abrupt or jarring transition to a sponsored section on videos made by content creators.The Conversation

About the Author:

Brian Jay Tang, Ph.D. Candidate in Computer Science and Engineering, University of Michigan and Kang G. Shin, Emeritus Professor of Computer Science, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

US government ramps up mass surveillance with help of AI tech, data brokers – and your apps and devices

By Anne Toomey McKenna, Penn State 

On a Saturday morning, you head to the hardware store. Your neighbors’ Ring cameras film your walk to the car. Your car’s sensors, cameras and microphones record your speed, how you drive, where you’re going, who’s with you, what you say, and biological metrics such as facial expression, weight and heart rate. Your car may also collect text messages and contacts from your connected smartphone.

Meanwhile, your phone continuously senses and records your communications, info about your health, what apps you’re using, and tracks your location via cell towers, GPS satellites and Wi-Fi and Bluetooth.

As you enter the store, its surveillance cameras identify your face and track your movements through the aisles. If you then use Apple or Google Pay to make your purchase, your phone tracks what you bought and how much you paid.

All this data quickly becomes commercially available, bought and sold by data brokers. Aggregated and analyzed by artificial intelligence, the data reveals detailed, sensitive information about you that can be used to predict and manipulate your behavior, including what you buy, feel, think and do.

Companies unilaterally collect data from most of your activities. This “surveillance capitalism” is often unrelated to the services device manufacturers, apps and stores are providing you. For example, Tinder is planning to use AI to scan your entire camera roll. And despite their promises, “opting out” doesn’t actually stop companies’ data collection.

While companies can manipulate you, they cannot put you in jail. But the U.S. government can, and it now purchases massive quantities of your information from commercial data brokers. The government is able to purchase Americans’ sensitive data because the information it buys is not subject to the same restrictions as information it collects directly.

The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels.

As a privacy, electronic surveillance and tech law attorney, author and legal educator, I have spent years researching, writing and advising about privacy and legal issues related to surveillance and data use. To understand the issues, it is critical to know how these technologies function, who collects what data about you, how that data can be used against you, and why the laws you might think are protecting your data do not apply or are ignored.

Big money for AI-driven tech and more data

Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion.

Disclosure of documents allegedly hacked from Homeland Security reveal a massive surveillance web that has all Americans in its scope.

DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies. It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents’ phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends. Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur.

DHS has also spent millions on AI-driven software used to detect sentiment and emotion in users’ online posts. Have you been complaining about Immigration and Customs Enforcement policies online? If so, social media companies including Google, Reddit, Discord, and Facebook and Instagram owner Meta may have sent identifying data, such as your name, email address, phone number and activity, to DHS in response to hundreds of DHS subpoenas served on the companies.

Meanwhile, the Trump administration’s national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund “wider deployment of AI tools across American industry” and to allow industry and academia to use federal datasets to train AI.

Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information.

Blurring lines and little oversight

In foreign intelligence work, the funding, development and controlled use of certain AI-driven gathering of data makes sense. The CIA’s new acquisition framework to turbocharge collaboration with the private sector may be legal with proper oversight. But the line between collaborating for lawful national security purposes versus unlawful domestic spying is becoming dangerously blurred or ignored.

For example, the Pentagon has declared a contractor, Anthropic, a national security risk because Anthropic insisted that its powerful agentic AI model, Claude, not be used for mass domestic surveillance of Americans or fully autonomous weapons.

On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans’ data from data brokers, including location histories, to track American citizens.

As the federal government accelerates the use of and investment in AI-driven spy tech, it is mandating less oversight around AI technology. In addition to the national AI policy framework, which discourages state regulation of AI, the president has issued executive orders to accelerate federal government adoption of AI systems, remove state law AI regulation barriers and require that the federal government not procure the use of AI models that attempt to adjust for bias. But using advanced AI systems is risky, given reports of AI agents going rogue, exposing sensitive data and becoming a threat, even during routine tasks.

Your data

The surveillance capitalism system requires people to unwittingly participate in a manipulative cycle of group- and self-surveillance. Neighborhood doorbell cameras, Flock license plate readers and hyperlocal social media sites like Nextdoor create a crowdsourced record of all people’s movements in public spaces.

Sensors in phones and wearable devices, such as earbuds and rings, collect ever more sensitive details. These include health data, including your heart rate and heart rate variability, blood oxygen, sweat and stress levels, behavioral patterns, neurological changes and even brain waves. Smartphones can be used to diagnose, assess and treat Parkinson’s disease. Earbuds could be used to monitor brain health.

This data is not protected under HIPAA, which prohibits health care providers and those working with them from disclosing your health information without your permission, because the law does not consider tech companies to be health care providers nor these wearables to be medical devices.

Legal protections

People have little choice when buying devices, using apps or opening accounts but to agree to lengthy terms that include consent for companies to collect and sell their personal data. This “consent” allows their data to end up in the largely unregulated commercial data market.

The government claims it can lawfully purchase this data from data brokers. But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach.

The Fourth Amendment prohibits unreasonable search and seizure by the government. Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act’s Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, the use of sensitive data by AI systems or to restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent.

In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans’ privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans’ data privacy and protects them from AI harms.

This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.The Conversation

About the Author:

Anne Toomey McKenna, Affiliated Faculty Member, Institute for Computational and Data Sciences, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Industries most exposed to AI are not only seeing productivity gains but jobs and wage growth too

By Christos Makridis, Arizona State University; Institute for Humane Studies 

Forecasts of the impact of artificial intelligence range from the apocalyptic to the utopian. An October 2025 report from Senate Democrats, for example, predicted AI will destroy millions of U.S. jobs. A couple of years earlier, consultant company McKinsey forecast AI will add trillions to the global economy, while emphasizing job losses can be mitigated by training workers to do new things.

The problem is that many of these claims are based on projections, overly simplified surveys or thought experiments rather than observed changes in the economy. That makes it hard for the public, and often policymakers, to know what to trust.

As a labor economist who studies how technology and organizational change affect productivity and well-being, I believe a better place to start is with actual data on output, employment and wages – which are all looking relatively more hopeful.

AI and jobs

In one of my new research papers with economist Andrew Johnston, we studied how exposure to generative AI affected industries across America between 2017 and 2024, using administrative data that covers nearly all employers. Our analysis covered a crucial period when generative AI use exploded, allowing us to analyze the effect within businesses and industries.

We measured AI exposure using occupation-level task data matched to each industry and state’s occupational workforce mix prior to the pandemic. A state and industry with more workers in roles requiring language processing, coding or data tasks scored higher on exposure, for example, compared with one with more plumbers and electricians.

We then took that exposure ranking by occupation and looked at changes in the standard deviation in occupational exposure, comparing that with labor market and GDP across states and industries from 2017 to 2024.

Think of a standard deviation as roughly the gap between a paramedic – whose work centers on physical assessment, emergency response and hands-on care that AI cannot easily replicate – and a public relations manager, whose work involves drafting communications, analyzing sentiment and synthesizing information that AI tools handle well. That gap in AI exposure is roughly what we’re measuring when we ask: Does being on the higher-exposure side of that divide change your industry’s trajectory?

This data allowed us to answer two questions: When AI tools became widely available following the public release of ChatGPT in late 2022, did states and industries that were more exposed to generative AI become more productive, and what happened to workers?

Our answers are more encouraging, and more nuanced, than much of the public debate suggests.

We found that industries in states that were more exposed to AI experienced faster productivity growth beginning in 2021 – before ChatGPT reached the public – driven by enterprise tools already embedded in professional workflows, including GitHub Copilot for software development, Jasper for marketing and content writing, and Microsoft’s GPT-3-powered business applications. In 2024, for example, industries whose AI exposure was one standard deviation higher saw gains of 10% in productivity, 3.9% in jobs and 4.8% in wages than comparable industries in the same state.

Those patterns suggest that, at least so far, AI has acted as a productivity-enhancing tool that boosts employment and wages rather than a simple substitute for labor.

Augmentation versus displacement

A crucial distinction in the data is between tasks where AI works with people and tasks where AI can act more independently. In sectors where AI mainly complements workers – think marketing, writing or financial analysis – our data show that employment rose by about 3.6% per standard deviation increase in exposure.

In sectors where AI can execute tasks more autonomously – including basic data processing, generating boilerplate code, or handling standardized customer interactions – we found no significant employment change, though workers in those roles saw slower wage growth.

What these findings suggest is that when AI lowers the cost of completing a task and raises worker productivity, companies expand output enough to increase their demand for labor overall — the same logic that explains why power tools didn’t eliminate construction workers.

The economic question is not whether any given task disappears. It is whether businesses and workers can reorganize fast enough to create new productive combinations. And so far, in most sectors, our evidence suggests they can.

But state policies also matter: These benefits were concentrated in the states with more efficient labor markets, meaning that the impact of generative AI on workers and the economy also depends on the types of policies and institutions of the local economy.

Importantly, these findings hold beyond occupational exposure. In additional work with co-authors at the Bureau of Economic Analysis, we found a similar effect on GDP and employment when looking at actual AI utilization — that is how often workers use AI. Drawing on the Gallup Workforce Panel, we measured workers actively using AI daily or multiple times a week. We found that each percentage-point increase in the share of frequent AI users in a state and industry is associated with roughly 0.1% to 0.2% higher real output and 0.2% to 0.4% higher employment.

To put that in context: The share of frequent AI users across all occupations rose from about 12% in mid-2024 to 26% by late 2025, a shift our estimates suggest corresponds to roughly 1.4% to 2.8% higher real output – or about 1 to 2 percentage points of annualized growth over that period.

New technologies rarely leave work untouched. But they also rarely eliminate the need for human contribution altogether. Instead, they change the composition of work, as our research shows. Some tasks shrink. Others expand. New ones emerge that were previously too costly or too hard to perform at scale. Put simply, some occupations might go away, but most of them just change.

If anything, the trends documented here are likely to strengthen rather than fade. Not only are generative AI tools rapidly improving, but also the experimentation and research and development that many workers and companies are engaging in are likely to pay large dividends. These investments – often referred to as intangible capital – tend to get unlocked a few years after a technology comes onto the scene, once complementary investments have been made.

The role of companies and managers

Whether AI leads to anxiety or adaptation for workers depends in part on what happens inside organizations. Using additional data collected over many years in the Gallup Workforce Panel covering more than 30,000 U.S. employees from 2023 to 2026, I found in a 2026 paper that workplace adoption of generative AI rose quickly over the period, with the share of workers using AI often increasing from 9% to 26%.

But the more important finding is that adoption was far more common where workers believed their organization had communicated a clear AI strategy and where employees said they trust leadership. This suggests that growing adoption and effective use of AI depends not only on the availability of the technology but on whether managers make its use clear, credible and safe.

Where that clarity exists, frequent AI use is associated with higher engagement and job satisfaction, and it even reverses the burnout penalties that appear elsewhere.

In other words, the broader economic effects of AI depend not only on how sophisticated the tools are but on whether companies and managers create environments where workers can experiment, reorganize tasks and integrate new tools into productive routines. That is, if employees do not feel the psychological safety to experiment, they are less likely to use AI, and they are especially less likely to use it for higher-value work.

That is precisely the kind of adaptation that I believe makes labor markets more resilient than the most alarmist forecasts suggest.The Conversation

About the Author:

Christos Makridis, Associate Research Professor of Information Systems, Arizona State University; Institute for Humane Studies

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

OpenAI has deleted the word ‘safely’ from its mission – and its new structure is a test for whether AI serves society or shareholders

By Alnoor Ebrahim, Tufts University 

OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” mission statement. But the ChatGPT maker seems to no longer have the same emphasis on doing so “safely.”

While reviewing its latest IRS disclosure form, which was released in November 2025 and covers 2024, I noticed OpenAI had removed “safely” from its mission statement, among other changes. That change in wording coincided with its transformation from a nonprofit organization into a business increasingly focused on profits.

OpenAI currently faces several lawsuits related to its products’ safety, making this change newsworthy. Many of the plaintiffs suing the AI company allege psychological manipulation, wrongful death and assisted suicide, while others have filed negligence claims.

As a scholar of nonprofit accountability and the governance of social enterprises, I see the deletion of the word “safely” from its mission statement as a significant shift that has largely gone unreported – outside highly specialized outlets.

And I believe OpenAI’s makeover is a test case for how we, as a society, oversee the work of organizations that have the potential to both provide enormous benefits and do catastrophic harm.

Tracing OpenAI’s origins

OpenAI, which also makes the Sora video artificial intelligence app, was founded as a nonprofit scientific research lab in 2015. Its original purpose was to benefit society by making its findings public and royalty-free rather than to make money.

To raise the money that developing its AI models would require, OpenAI, under the leadership of CEO Sam Altman, created a for-profit subsidiary in 2019. Microsoft initially invested US$1 billion in this venture; by 2024 that sum had topped $13 billion.

In exchange, Microsoft was promised a portion of future profits, capped at 100 times its initial investment. But the software giant didn’t get a seat on OpenAI’s nonprofit board – meaning it lacked the power to help steer the AI venture it was funding.

A subsequent round of funding in late 2024, which raised $6.6 billion from multiple investors, came with a catch: that the funding would become debt unless OpenAI converted to a more traditional for-profit business in which investors could own shares, without any caps on profits, and possibly occupy board seats.

Establishing a new structure

In October 2025, OpenAI reached an agreement with the attorneys general of California and Delaware to become a more traditional for-profit company.

Under the new arrangement, OpenAI was split into two entities: a nonprofit foundation and a for-profit business.

The restructured nonprofit, the OpenAI Foundation, owns about one-fourth of the stock in a new for-profit public benefit corporation, the OpenAI Group. Both are headquartered in California but incorporated in Delaware.

A public benefit corporation is a business that must consider interests beyond shareholders, such as those of society and the environment, and it must issue an annual benefit report to its shareholders and the public. However, it is up to the board to decide how to weigh those interests and what to report in terms of the benefits and harms caused by the company.

The new structure is described in a signed in October 2025 by OpenAI and the California attorney general, and endorsed by the Delaware attorney general.

Many business media outlets heralded the move, predicting that it would usher in more investment. Two months later, SoftBank, a Japanese conglomerate, finalized a $41 billion investment in OpenAI.

Changing its mission statement

Most charities must file forms annually with the Internal Revenue Service with details about their missions, activities and financial status to show that they qualify for tax-exempt status. Because the IRS makes the forms public, they have become a way for nonprofits to signal their missions to the world.

In its forms for 2022, , OpenAI said its mission was “to build general-purpose artificial intelligence (AI) that safely benefits humanity, unconstrained by a need to generate financial return.”

This is the top of the front page of the 2023 990 form for OpenAI, with its mission stated at the bottom of the screenshot.
OpenAI’s mission statement as of 2023 included the word ‘safely.’
IRS via Candid

That mission statement has changed, as of – which the company filed with the IRS in late 2025. It became “to ensure that artificial general intelligence benefits all of humanity.”

This is the top of the front page of the 2024 990 form for OpenAI, with its mission stated at the bottom of the screenshot.
OpenAI’s mission statement as of 2024 no longer included the word ‘safely.’
IRS via Candid

OpenAI had dropped its commitment to safety from its mission statement – along with a commitment to being “unconstrained” by a need to make money for investors. According to Platformer, a tech media outlet, it has also disbanded its “mission alignment” team.

In my view, these changes explicitly signal that OpenAI is making its profits a higher priority than the safety of its products.

To be sure, OpenAI continues to mention safety when it discusses its mission. “We view this mission as the most important challenge of our time,” it states on its website. “It requires simultaneously advancing AI’s capability, safety, and positive impact in the world.”

Revising its legal governance structure

Nonprofit boards are responsible for key decisions and upholding their organization’s mission.

Unlike private companies, board members of tax-exempt charitable nonprofits cannot personally enrich themselves by taking a share of earnings. In cases where a nonprofit owns a for-profit business, as OpenAI did with its previous structure, investors can take a cut of profits – but they typically do not get a seat on the board or have an opportunity to elect board members, because that would be seen as a conflict of interest.

The OpenAI Foundation now has a 26% stake in OpenAI Group. In effect, that means that the nonprofit board has given up nearly three-quarters of its control over the company. Software giant Microsoft owns a slightly larger stake – 27% of OpenAI’s stock – due to its $13.8 billion investment in the AI company to date. OpenAI’s employees and its other investors own the rest of the shares.

Seeking more investment

The main goal of OpenAI’s restructuring, which it called a “recapitalization,” was to attract more private investment in the race for AI dominance.

It has already succeeded on that front.

As of early February 2026, the company was in talks with SoftBank for an additional $30 billion and stands to get up to a total of $60 billion from Amazon, Nvidia and Microsoft combined.

OpenAI is now valued at over $500 billion, up from $300 billion in March 2025. The new structure also paves the way for an eventual initial public offering, which, if it happens, would not only help the company raise more capital through stock markets but would also increase the pressure to make money for its shareholders.

OpenAI says the foundation’s endowment is worth about $130 billion.

Those numbers are only estimates because OpenAI is a privately held company without publicly traded shares. That means these figures are based on market value estimates rather than any objective evidence, such as market capitalization.

When he announced the new structure, California Attorney General Rob Bonta said, “We secured concessions that ensure charitable assets are used for their intended purpose.” He also predicted that “safety will be prioritized” and said the “top priority is, and always will be, protecting our kids.”

Steps that might help keep people safe

At the same time, several conditions in the OpenAI restructuring memo are designed to promote safety, including:

  1. A safety and security committee on the OpenAI Foundation board has the authority to that could potentially include the halting of a release of new OpenAI products based on assessments of their risks.
  2. The for-profit OpenAI Group has its own board, which must consider only OpenAI’s mission – rather than financial issues – regarding safety and security issues.
  3. The OpenAI Foundation’s nonprofit board gets to appoint all members of the OpenAI Group’s for-profit board.

But given that neither the mission of the foundation nor of the OpenAI group explicitly alludes to safety, it will be hard to hold their boards accountable for it.

Furthermore, since all but one board member currently serve on both boards, it is hard to see how they might oversee themselves. And doesn’t indicate whether he was aware of the removal of any reference to safety from the mission statement.

Identifying other paths OpenAI could have taken

There are alternative models that I believe would serve the public interest better than this one.

When Health Net, a California nonprofit health maintenance organization, converted to a for-profit insurance company in 1992, regulators required that 80% of its equity be transferred to another nonprofit health foundation. Unlike with OpenAI, the foundation had majority control after the transformation.

A coalition of California nonprofits has argued that the attorney general should require OpenAI to transfer all of its assets to an independent nonprofit.

Another example is The Philadelphia Inquirer. The Pennsylvania newspaper became a for-profit public benefit corporation in 2016. It belongs to the Lenfest Institute, a nonprofit.

This structure allows Philadelphia’s biggest newspaper to attract investment without compromising its purpose – journalism serving the needs of its local communities. It’s become a model for potentially transforming the local news industry.

At this point, I believe that the public bears the burden of two governance failures. One is that OpenAI’s board has apparently abandoned its mission of safety. And the other is that the attorneys general of California and Delaware have let that happen.The Conversation

About the Author:

Alnoor Ebrahim, Professor of International Business, The Fletcher School & Tisch College of Civic Life, Tufts University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Probability underlies much of the modern world – an engineering professor explains how it actually works

By Zachary del Rosario, Olin College of Engineering 

Probability underpins AI, cryptography and statistics. However, as the philosopher Bertrand Russell said, “Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means.”

I teach statistics to engineers, so I know that while probability is important, it is counterintuitive.

Probability is a branch of mathematics that describes randomness. When scientists describe randomness, they’re describing chance events – like a coin flip – not strange occurrences, like a person dressed as a zebra. While scientists do not have a way to predict strange occurrences, probability does predict long-run behavior – that is, the trends that emerge from many repeated events.

Left: A person in a zebra costume. Right: A coin in mid-air after being flipped. A hand is visible with thumb extended upward.
We may say ‘random’ to describe strange occurrences (person dressed as zebra), but probability describes chance events (a coin flip).
Zebras in La Paz, Bolivia by EEJCC, Own Work CC A-SA 4.0; https://commons.wikimedia.org/wiki/File:Zebra_La_Paz.jpg _ , CC BY-SA

Modeling with probability

Since probability is about events, a scientist must choose which events to study. This choice defines the sample space. When flipping a coin, for example, you might define your event as the way it lands.

Coins almost always land on heads or tails. However, it’s possible – if very unlikely – for a coin to land on its side. So to create a sample space, you’d have two choices: heads and tails, or heads, tails and side. For now, ignore the side landings and use heads and tails as our sample space.

Next, you would assign probabilities to the events. Probability describes the rate of occurrence of an event and takes values between 0% and 100%. For example, a fair flip will tend to land 50% heads up and 50% tails up.

To assign probabilities, however, you need to think carefully about the scenario. What if the person flipping the coin is a cheater? There’s a sneaky technique to “wobble” the coin without flipping, controlling the outcome. Even if you can prevent cheating, real coin flips are slightly more probable to land on their starting face – so if you start the flip with the coin heads up, it’s very slightly more likely to land heads up.

In both the cheating and real flip cases, you need an appropriate sample space: starting face and other face. To have a fair flip in the real world, you’d need an additional step where you randomly – with equal probability – choose the starting face, then flip the coin.

Three bar graphs displaying probabilities for different outcomes. The 'Fair' Flip assigns equal probability (50%) to both heads and tails. The Real Flip assigns 51% to the Starting Face and 49% to the Other Face. The Cheater's Flip assigns 100% to the Starting Face.
The probabilities for different coin-flipping scenarios.
Zachary del Rosario, CC BY-SA

These assumptions add up quickly. To have a fair flip, you had to ignore side landings, assume no one is cheating, and assume the starting face is evenly random. Together, these assumptions constitute a model for the coin flip with random outcomes. Probability tells us about the long-run behavior of a random model. In the case of the coin model, probability describes how many coins land on heads out of many flips.

But instead of using a random model, why not just solve the coin toss using physics? Actually, scientists have done just that, and the physics shows that slight changes in the speed of the flip determine whether it comes up heads or tails. This sensitivity makes a coin flip unpredictable, so a random model is a good one.

Frequency vs. probability

Probability differs from frequency, which is the rate of events in a sequence. For example, if you flip a coin eight times and get two heads, that’s a frequency of 25%. Even if the probability of flipping a coin and seeing heads is 50% over the long run, each short sequence of flips will come out different. Four heads and four tails is the most probable outcome from eight flips, but other events can – and will – happen.

Frequency and probability are the same in one special setting: when the number of data points goes to infinity. In this sense, probability tells us about long-run behavior.

A bar chart of probabilities for all possible outcomes of eight 'fair' coin flips. Four heads has the highest probability (~27%), and the distribution is symmetric around four heads.
Probabilities for all possible outcomes of eight ‘fair’ coin flips.
Zachary del Rosario, CC BY-SA

Applications to AI, cryptography and statistics

Probability isn’t just useful for predicting coin flips. It underlies many modern technological systems.

For example, AI systems such as large language models, or LLMs, are based on next-word prediction. Essentially, they compute a probability for the words that follow your prompt. For example, with the prompt “New York” you might get “City” or “State” as the predicted next word, because in the training data those are the words that most frequently follow.

But since probability describes randomness, the outputs of a LLM are random. Just like a sequence of coin flips is not guaranteed to come out the same way every time, if you ask an LLM the same question again, you will tend to get a different response. Effectively, each next word is treated like a new coin flip.

Randomness is also key to cryptography: the science of securing information. Cryptographic communication uses a shared secret, such as a password, to secure information. However, surprising randomness isn’t good enough for security, which is why picking a surprising word is a bad choice of password. A shared secret is only secure if it’s hard to guess. Even if a word is surprising, real words are easier to guess than flipping a “coin” for each letter.

You can make a much stronger password by using probability to choose characters at random on your keyboard – or better yet, use a password manager.

Finally, randomness is key in statistics. Statisticians are responsible for designing and analyzing studies to make use of limited data. This practice is especially important when studying medical treatments, because every data point represents a person’s life.

The gold standard is a randomized controlled trial. Participants are assigned to receive the new treatment or the current standard of care based on a fair coin flip. It may seem strange to do this assignment randomly – using coin flips to make decisions about lives. However, the unpredictability serves an important role, as it ensures that nothing about the person affects their chance to get the treatment: not age, gender, race, income or any other factor. The unpredictability helps scientists ensure that only the treatment causes the observed result and not any other factor.

So what does probability mean? Like any kind of math, it’s only a model, meaning it can’t perfectly describe the world. In the examples discussed, probability is useful for describing long-term behaviors and using unpredictability to solve practical problems.The Conversation

About the Author:

Zachary del Rosario, Assistant Professor of Engineering, Olin College of Engineering

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI Agent Firm With Payments Technology Expands Into Texas, New Jersey

Source: Streetwise Reports (2/17/26)

The FUTR Corp. (FTRC:TSX; FTRCF:OTC) announces it has signed agreements with four new dealerships, expanding the reach of its FUTR Payments product into Houston, Texas, and further strengthening its presence in New Jersey. Find out why one analyst says the company is uniquely positioned to take advantage of this “pivotal moment” in the history of AI.

The FUTR Corp. (FTRC:TSX; FTRCF:OTC) announced that it has signed agreements with four new auto dealerships, expanding the reach of its FUTR Payments product into Houston, Texas, and further strengthening its presence in New Jersey, according to a February 17 release from the company.

FUTR is the creator of the FUTR Agent App, which allows users to store, manage, access, and monetize their personal information and make real-time payments.

AI agents were at the center of what “is shaping up to be one of the most audacious branding plays in the history of the internet,” a new US$70 million mega deal for the domain name AI.com announced during last weekend’s Super Bowl.

The signed agreements mark FUTR Payments’ first dealer relationship in Texas and are the initial results of the company’s newly deployed sales resources aimed at geographic expansion and reinforcing its presence in existing markets. With these agreements, FUTR Payments’ U.S. footprint now includes Texas, New York, New Jersey, Delaware, Florida, Iowa, and Connecticut.

These initial dealership partnerships are expected to serve as reference points as FUTR Payments continues to grow its presence in U.S. regional markets. The company said it plans to provide future updates as more dealerships, dealer groups, and regions join the platform.

“Expanding our footprint in New Jersey and entering the Texas market represent an important inflection point for FUTR Payments as we scale our platform across major U.S. markets,” said FUTR Payments Chief Business Officer Mindy Bruns. “These early dealership partnerships validate the demand we’re seeing for intelligent payment infrastructure that helps consumers build financial security while enabling dealers to engage customers in more durable, data-driven ways. We believe this expansion is an early indicator of the broader national opportunity ahead.”

Texas is one of the largest and most diverse automotive markets in the United States, with a significant concentration of independent and used-vehicle dealerships. Trailer Wheel & Frame Co., the company’s first Houston dealer, introduces a new asset category for FUTR Payments, expanding the applicability of its intelligent payment rails beyond traditional automotive inventory. The newly signed agreement with Speedway Motors LLC in Paterson, N.J., further expands the company’s presence in New Jersey by adding three more storefronts.

FUTR Payments is part of FUTR’s broader strategy, which combines intelligent payment infrastructure with consented consumer data and AI-enabled Agents.

Analyst: A ‘Pivotal Moment’ in AI Marketing

According to an updated research note by Research Capital Corp. Analyst Greg McLeish on February 11, “This year’s Super Bowl marked a pivotal moment in how artificial intelligence is being marketed to consumers.”

AI-related advertising has emerged as a key theme, illustrating how quickly AI has transitioned from an abstract concept to a mainstream consumer offering, Business Insider reported. The focus has shifted from chatbots to utility-driven AI systems that can perform tasks on behalf of users. In this context, Crypto.com’s launch of AI.com garnered significant attention and traffic, highlighting the growing interest in “AI agents that do things,” rather than systems that merely respond to prompts. Additionally, OpenClaw’s viral success in late January provided further validation, showing that autonomous, task-executing agents are gaining traction across both consumer platforms and developer communities. These developments indicate that AI agents are becoming mainstream as everyday utilities, rather than novelty tools.

“Crypto.com’s Super Bowl debut of AI.com reinforced that AI agents are moving decisively beyond experimental chat interfaces into mass-market, action-oriented tools,” McLeish wrote. “By committing roughly US$70 million for the AI.com domain and positioning its product as a ‘private AI agent,’ the company highlighted a shift toward autonomous digital assistants capable of managing schedules, automating workflows, and completing tasks on behalf of users. Post-game traffic reportedly overwhelmed early infrastructure, reinforcing strong initial engagement and signaling that consumer adoption is increasingly driven by execution rather than conversation.”

While recent agent launches validate the rising demand for autonomous AI, many early entrants lack the infrastructure needed for durable, real-world deployment, the analyst said.

“The FUTR Corp. is differentiated by anchoring its AI Agent in a SOC 2–compliant digital vault and embedding it directly into regulated financial workflows,” McLeish wrote. “FUTR’s platform combines compliance-grade data infrastructure, live banking and payment rails, and enterprise integrations that allow an agent not only to recommend actions, but to execute them securely across payments, credit, insurance, and home finance. Unlike consumer-facing agents that rely on generic cloud access, or open-source solutions that place security and operational burdens on the user, FUTR’s agent operates within institutional guardrails and is distributed through enterprise partnerships.”

McLeish continued, “In our view, this positions FUTR as an AI-native financial infrastructure layer, rather than a chatbot alternative, as agents move from novelty to necessity.”

Research Capital Corp. maintained its Speculative Buy rating on the stock with a CA$3 target price, a 991% return based on a sum-of-the-parts valuation taken at the time of writing.

“The result is a high-conviction opportunity at the intersection of consumer data, tokenized incentives, and privacy-first infrastructure,” McLeish said.

Most Expensive Domain Purchase in History

The US$70 million acquisition of AI.com by Crypto.com founder Kris Marszalek marks the most expensive domain purchase in history, paid entirely in cryptocurrency to an undisclosed seller, as reported by the Financial Times and covered by Connie Loizos for TechCrunch on February 8. This transaction sets a new standard in domain sales, surpassing previous record holders such as CarInsurance.com at US$49.7 million (2010), VacationRentals.com at US$35 million (2007), and Voice.com at US$30 million (2019).

In a letter to shareholders following the announcement, Alex McDougall, CEO of The FUTR Corp. (FTRC:TSX; FTRCF:OTC), stated that this acquisition “officially marks the beginning of functional AI Agents going mainstream.”

McDougall expressed the belief that this will become “the largest category the world has ever seen and as foundational as the advent of the internet.” He emphasized that FUTR is “ideally positioned to be at the front of the wave that is here.”

McDougall outlined the company’s progress: “We have set up the infrastructure in Q2, built the technology stack through Q3, signed the first wave of commercial partnerships through Q4, and now in Q1 it’s coming to market and the timing couldn’t be better” for FUTR’s agent. The company’s agent offers significant real-world utility, such as rewarding users for taking a picture of their property tax slip, knowing when those taxes are due, helping reserve cash flow in the budget to pay the tax, reminding users 15 days before the due date, comparing property taxes to other neighborhoods and home values, making the payment, and even reporting the payment to credit bureaus to maximize credit scores.

“That’s deep real-world utility,” McDougall said.

AI Agents Tailored to Your Data

According to FUTR, the AI agent within their app is not only easily accessible but also operates under your guidance, tailored specifically to your data. This AI is designed for individuals and works exclusively for you around the clock to accomplish your tasks. It integrates data from various sources and smoothly handles complex financial queries and services. “Chat GPT can find you information. It can order you food. It can do things in your browser,” McDougall told Streetwise Reports. But “FUTR can take your insurance policy, tell you where it’s good, where it’s bad, and find you a better one from our curated brand partners. If you put your mortgage into it, FUTR can read it, learn about it, tell you what clauses are suspect, find a better payment schedule for you, and then actually connect to payment rails and make those payments for you to take that intelligence and turn it into real action.”

For renters, FUTR could track when to renew your lease and report your rent payments to the credit bureau to help build your credit. “That’s really a key differentiator,” McDougall said. “There are a lot of AI agents that can tell you things. FUTR can go and do things for you.”

In a unique feature, FUTR tokens, created by the FUTR Foundation on the BASE Blockchain that powers the FUTR ecosystem, reward consumers and enterprises with tokens for sharing data, which they can use to purchase goods and services from FUTR brand partners. “Brands can purchase FUTR tokens or earn them from consumers and use those tokens to pay for leads from FUTR,” the company stated on its website.

According to the company, upcoming catalysts that could impact the stock price include the broad launch of the FUTR AI Agent App and FUTR Token sometime this quarter. The company also plans to introduce a FUTR Visa card.

The Catalyst: ‘Your Person For Everything’

According to the company, FUTR’s agent “can be your person for everything,” as McDougall explained. Unlike Chat GPT, which is designed for billions and processes data in a generalized way, the FUTR AI creates a personalized AI stack for each user, which the company refers to as “high fidelity AI.” A key advantage is that instead of your data being monetized without your knowledge, “every piece of data that goes into this agent and into this engine, you’re getting paid for it,” he said.

AI-powered shopping, with agents like FUTR’s acting on our behalf, signifies a major shift in the marketplace, according to a report by McKinsey & Co. This development points to a future where AI anticipates consumer needs, explores shopping options, negotiates deals, and completes transactions, all aligned with human intentions but operating independently through multistep processes enabled by reasoning models.

“This isn’t just an evolution of e-commerce,” the report stated. “It’s a rethinking of shopping itself in which the boundaries between platforms, services, and experiences give way to an integrated intent-driven flow, through highly personalized consumer journeys that deliver a fast, frictionless outcome.”

Streetwise Ownership Overview*

Insiders and Management: 23%
Share Structure as of 2/3/2026

By 2030, the U.S. B2C retail market alone could see up to US$1 trillion in orchestrated revenue from agentic commerce, with global estimates ranging from US$3 trillion to US$5 trillion, according to McKinsey research. This trend is expected to have an impact comparable to previous web and mobile-commerce revolutions, but it could progress even more rapidly since agents can navigate the same digital paths to purchase as humans, effectively “riding on the rails” established by these earlier transformations, researchers noted.

“This presents both benefits and risks for today’s commerce ecosystem,” McKinsey explained. “All kinds of businesses — brands, retailers, marketplaces, logistics and commerce services providers, and payments players — will need to adapt to the new paradigm and successfully navigate the challenges of trust, risk, and innovation.”

Ownership and Share Structure1

Approximately 23% of the company is owned by management and insiders. The remainder is held by retail investors.

Top shareholders include G. Scott Paterson with 8.38%, Melrose Ventures LLC with 2.08%, Michael Hillmer with 0.74%, Ashish Kapoor with 0.55%, and Jason G. Ewart with 0.52%.

The company’s market cap on February 12 was CA$35.1 million with 125.36 million shares outstanding. It trades within a 52-week range of CA$0.09 and CA$0.42.


Important Disclosures:

  1. The FUTR Corp. is a billboard sponsor of Streetwise Reports and pays SWR a monthly sponsorship fee between US$3,000 and US$6,000.
  2. As of the date of this article, officers, contractors, shareholders, and/or employees of Streetwise Reports LLC (including members of their household) own securities of The FUTR Corp.
  3. Steve Sobek wrote this article for Streetwise Reports LLC and provides services to Streetwise Reports as an employee.
  4. This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports’ terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.

For additional disclosures, please click here.

1. Ownership and Share Structure Information

The information listed above was updated on the date this article was published and was compiled from information from the company and various other data providers.

Data centers told to pitch in as storms and cold weather boost power demand

By Nikki Luke, University of Tennessee and Conor Harrison, University of South Carolina 

As Winter Storm Fern swept across the United States in late January 2026, bringing ice, snow and freezing temperatures, it left more than a million people without power, mostly in the Southeast.

Scrambling to meet higher than average demand, PJM, the nonprofit company that operates the grid serving much of the mid-Atlantic U.S., asked for federal permission to generate more power, even if it caused high levels of air pollution from burning relatively dirty fuels.

Energy Secretary Chris Wright agreed and took another step, too. He authorized PJM and ERCOT – the company that manages the Texas power grid – as well as Duke Energy, a major electricity supplier in the Southeast, to tell data centers and other large power-consuming businesses to turn on their backup generators.

The goal was to make sure there was enough power available to serve customers as the storm hit. Generally, these facilities power themselves and do not send power back to the grid. But Wright explained that their “industrial diesel generators” could “generate 35 gigawatts of power, or enough electricity to power many millions of homes.”

We are scholars of the electricity industry who live and work in the Southeast. In the wake of Winter Storm Fern, we see opportunities to power data centers with less pollution while helping communities prepare for, get through and recover from winter storms.

Data centers use enormous quantities of energy

Before Wright’s order, it was hard to say whether data centers would reduce the amount of electricity they take from the grid during storms or other emergencies.

This is a pressing question, because data centers’ power demands to support generative artificial intelligence are already driving up electricity prices in congested grids like PJM’s.

And data centers are expected to need only more power. Estimates vary widely, but the Lawrence Berkeley National Lab anticipates that the share of electricity production in the U.S. used by data centers could spike from 4.4% in 2023 to between 6.7% and 12% by 2028. PJM expects a peak load growth of 32 gigawatts by 2030 – enough power to supply 30 million new homes, but nearly all going to new data centers. PJM’s job is to coordinate that energy – and figure out how much the public, or others, should pay to supply it.

The race to build new data centers and find the electricity to power them has sparked enormous public backlash about how data centers will inflate household energy costs. Other concerns are that power-hungry data centers fed by natural gas generators can hurt air quality, consume water and intensify climate damage. Many data centers are located, or proposed, in communities already burdened by high levels of pollution.

Local ordinances, regulations created by state utility commissions and proposed federal laws have tried to protect ratepayers from price hikes and require data centers to pay for the transmission and generation infrastructure they need.

Always-on connections?

In addition to placing an increasing burden on the grid, many data centers have asked utility companies for power connections that are active 99.999% of the time.

But since the 1970s, utilities have encouraged “demand response” programs, in which large power users agree to reduce their demand during peak times like Winter Storm Fern. In return, utilities offer financial incentives such as bill credits for participation.

Over the years, demand response programs have helped utility companies and power grid managers lower electricity demand at peak times in summer and winter. The proliferation of smart meters allows residential customers and smaller businesses to participate in these efforts as well. When aggregated with rooftop solar, batteries and electric vehicles, these distributed energy resources can be dispatched as “virtual power plants.”

A different approach

The terms of data center agreements with local governments and utilities often aren’t available to the public. That makes it hard to determine whether data centers could or would temporarily reduce their power use.

In some cases, uninterrupted access to power is necessary to maintain critical data systems, such as medical records, bank accounts and airline reservation systems.

Yet, data center demand has spiked with the AI boom, and developers have increasingly been willing to consider demand response. In August 2025, Google announced new agreements with Indiana Michigan Power and the Tennessee Valley Authority to provide “data center demand response by targeting machine learning workloads,” shifting “non-urgent compute tasks” away from times when the grid is strained. Several new companies have also been founded specifically to help AI data centers shift workloads and even use in-house battery storage to temporarily move data centers’ power use off the grid during power shortages.

Flexibility for the future

One study has found that if data centers would commit to using power flexibly, an additional 100 gigawatts of capacity – the amount that would power around 70 million households – could be added to the grid without adding new generation and transmission.

In another instance, researchers demonstrated how data centers could invest in offsite generation through virtual power plants to meet their generation needs. Installing solar panels with battery storage at businesses and homes can boost available electricity more quickly and cheaply than building a new full-size power plant. Virtual power plants also provide flexibility as grid operators can tap into batteries, shift thermostats or shut down appliances in periods of peak demand. These projects can also benefit the buildings where they are hosted.

Distributed energy generation and storage, alongside winterizing power lines and using renewables, are key ways to help keep the lights on during and after winter storms.

Those efforts can make a big difference in places like Nashville, Tennessee, where more than 230,000 customers were without power at the peak of outages during Fern, not because there wasn’t enough electricity for their homes but because their power lines were down.

The future of AI is uncertain. Analysts caution that the AI industry may prove to be a speculative bubble: If demand flatlines, they say, electricity customers may end up paying for grid improvements and new generation built to meet needs that would not actually exist.

Onsite diesel generators are an emergency solution for large users such as data centers to reduce strain on the grid. Yet, this is not a long-term solution to winter storms. Instead, if data centers, utilities, regulators and grid operators are willing to also consider offsite distributed energy to meet electricity demand, then their investments could help keep energy prices down, reduce air pollution and harm to the climate, and help everyone stay powered up during summer heat and winter cold.The Conversation

About the Author:

Nikki Luke, Assistant Professor of Human Geography, University of Tennessee and Conor Harrison, Associate Professor of Economic Geography, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Moore’s law: the famous rule of computing has reached the end of the road, so what comes next?

By Domenico Vicinanza, Anglia Ruskin University 

For half a century, computing advanced in a reassuring, predictable way. Transistors – devices used to switch electrical signals on a computer chip – became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing.

These faster chips enable greater computing power by allowing devices to perform tasks more efficiently. As a result, we saw scientific simulations improving, weather forecasts becoming more accurate, graphics more realistic, and later, machine learning systems being developed and flourishing. It looked as if computing power itself obeyed a natural law.

This phenomenon became known as Moore’s Law, after the businessman and scientist Gordon Moore. Moore’s Law summarised the empirical observation that the number of transistors on a chip approximately doubled every couple of years. This also allows the size of devices to shrink, so it drives miniaturisation.

That sense of certainty and predictability has now gone, and not because innovation has stopped, but because the physical assumptions that once underpinned it no longer hold.

So what replaces the old model of automatic speed increases? The answer is not a single breakthrough, but several overlapping strategies.

One involves new materials and transistor designs. Engineers are refining how transistors are built to reduce wasted energy and unwanted electrical leakage. These changes deliver smaller, more incremental improvements than in the past, but they help keep power use under control.

Another approach is changing how chips are physically organised. Rather than placing all components on a single flat surface, modern chips increasingly stack parts on top of each other or arrange them more closely. This reduces the distance that data has to travel, saving both time and energy.

Perhaps the most important shift is specialisation. Instead of one general-purpose processor trying to do everything, modern systems combine different kinds of processors. Traditional processing units or CPUs handle control and decision-making. Graphics processors, are powerful processing units that were originally designed to handle the demands of graphics for computer games and other tasks. AI accelerators (specialised hardware that speeds up AI tasks) focus on large numbers of simple calculations carried out in parallel. Performance now depends on how well these components work together, rather than on how fast any one of them is.

Alongside these developments, researchers are exploring more experimental technologies, including quantum processors (which harness the power of quantum science) and photonic processors, which use light instead of electricity.

These are not general-purpose computers, and they are unlikely to replace conventional machines. Their potential lies in very specific areas, such as certain optimisation or simulation problems where classical computers can struggle to explore large numbers of possible solutions efficiently. In practice, these technologies are best understood as specialised co-processors, used selectively and in combination with traditional systems.

For most everyday computing tasks, improvements in conventional processors, memory systems and software design will continue to matter far more than these experimental approaches.

For users, life after Moore’s Law does not mean that computers stop improving. It means that improvements arrive in more uneven and task-specific ways. Some applications, such as AI-powered tools, diagnostics, navigation, complex modelling, may see noticeable gains, while general-purpose performance increases more slowly.

New technologies

At the Supercomputing SC25 conference in St Louis, hybrid systems that mix CPUs (processors) and GPUs (graphics processing units) with emerging technologies such as quantum or photonic processors were increasingly presented and discussed as practical extensions of classical computing. For most everyday tasks, improvements in classical processors, memories and software will continue to deliver the biggest gains.

But there is growing interest in using quantum and photonic devices as co-
processors, not replacements. Their appeal lies in tackling specific classes of
problems, such as complex optimisation or routing tasks, where finding low-energy
or near-optimal solutions can be exponentially expensive for classical machines
alone.

In this supporting role, they offer a credible way to combine the reliability of
classical computing with new computational techniques that expand what these
systems can do.

Life after Moore’s Law is not a story of decline, but one that requires constant
transformation and evolution. Computing progress now depends on architectural
specialisation, careful energy management, and software that is deeply aware of
hardware constraints. The danger lies in confusing complexity with inevitability, or marketing narratives with solved problems.

The post-Moore era forces a more honest relationship with computation where performance is not anymore something we inherit automatically from smaller transistors, but it is something we must design, justify, and pay for, in energy, in complexity, and in trade-offs.The Conversation

About the Author:

Domenico Vicinanza, Associate Professor of Intelligent Systems and Data Science, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI-induced cultural stagnation is no longer speculation − it’s already happening

By Ahmed Elgammal, Rutgers University 

Generative AI was trained on centuries of art and writing produced by humans.

But scientists and critics have wondered what would happen once AI became widely adopted and started training on its outputs.

A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously – generating and interpreting their own outputs without human intervention.

The researchers linked a text-to-image system with an image-to-text system and let them iterate – image, caption, image, caption – over and over and over.

Regardless of how diverse the starting prompts were – and regardless of how much randomness the systems were allowed – the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings and pastoral landscapes. Even more striking, the system quickly “forgot” its starting prompt.

The researchers called the outcomes “visual elevator music” – pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image.

After repeating this loop, the researchers ended up with a bland image of a formal interior space – no people, no drama, no real sense of time and place.

A collage of AI-generated images that begins with a politician surrounded by policy papers and progresses to a room with fancy red curtains.
A prompt that begins with a prime minister under stress ends with an image of an empty room with fancy furnishings.
Arend Hintze, Frida Proschinger Åström and Jory Schossau, CC BY

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation.

The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. The convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes.

This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered and regenerated as it moves between words, images and videos. New articles on the web are now more likely to be written by AI than humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation.

Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology. Humans, they argue, will always be the final arbiter of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins.

The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce – when used autonomously and repeatedly – is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable and the conventional.

Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression.

But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate and rank cultural products – news stories, songs, memes, academic papers, photographs or social media posts – millions of times per day, guided by the same built-in assumptions about what is “typical.”

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design or corporate negligence, but because only certain kinds of meaning survive the text-to-image-to-text repeated conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk – not a speculative fear – if generative systems are left to operate in their current iteration.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space.

In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without it, systems optimize for familiarity because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in translation

Whenever you write a caption for an image, details will be lost. Likewise for generating an image from text. And this happens whether it’s being performed by a human or a machine.

In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing from one medium to another. When meaning passes repeatedly through two different formats, only the most stable elements persist.

But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.

The implication is sobering: Even with human guidance – whether that means writing prompts, selecting outputs or refining results – these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s “average.”

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. There can be rewards for deviation and support for less common and less mainstream forms of expression.

The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content.

Cultural stagnation is no longer speculation. It’s already happening.The Conversation

About the Author:

Ahmed Elgammal, Professor of Computer Science and Director of the Art & AI Lab, Rutgers University

This article is republished from The Conversation under a Creative Commons license. Read the original article.