Archive for Programming – Page 2

Companies are already using agentic AI to make decisions, but governance is lagging behind

By Murugan Anandarajan, Drexel University 

Businesses are acting fast to adopt agentic AI – artificial intelligence systems that work without human guidance – but have been much slower to put governance in place to oversee them, a new survey shows. That mismatch is a major source of risk in AI adoption. In my view, it’s also a business opportunity.

I’m a professor of management information systems at Drexel University’s LeBow College of Business, which recently surveyed more than 500 data professionals through its Center for Applied AI & Business Analytics. We found that 41% of organizations are using agentic AI in their daily operations. These aren’t just pilot projects or one-off tests. They’re part of regular workflows.

At the same time, governance is lagging. Only 27% of organizations say their governance frameworks are mature enough to monitor and manage these systems effectively.

In this context, governance is not about regulation or unnecessary rules. It means having policies and practices that let people clearly influence how autonomous systems work, including who is responsible for decisions, how behavior is checked, and when humans should get involved.

This mismatch can become a problem when autonomous systems act in real situations before anyone can intervene.

For example, during a recent power outage in San Francisco, autonomous robotaxis got stuck at intersections, blocking emergency vehicles and confusing other drivers. The situation showed that even when autonomous systems behave “as designed,” unexpected conditions can lead to undesirable outcomes.

This raises a big question: When something goes wrong with AI, who is responsible – and who can intervene?

Why governance matters

When AI systems act on their own, responsibility no longer lies where organizations expect it. Decisions still happen, but ownership is harder to trace. For instance, in financial services, fraud detection systems increasingly act in real time to block suspicious activity before a human ever reviews the case. Customers often only find out when their card is declined.

So, what if your card is mistakenly declined by an AI system? In that situation, the problem isn’t with the technology itself – it’s working as it was designed – but with accountability. Research on human-AI governance shows that problems happen when organizations don’t clearly define how people and autonomous systems should work together. This lack of clarity makes it hard to know who is responsible and when they should step in.

Without governance designed for autonomy, small issues can quietly snowball. Oversight becomes sporadic and trust weakens, not because systems fail outright, but because people struggle to explain or stand behind what the systems do.

When humans enter the loop too late

In many organizations, humans are technically “in the loop,” but only after autonomous systems have already acted. People tend to get involved once a problem becomes visible – when a price looks wrong, a transaction is flagged or a customer complains. By that point, the system has already been decided, and human review becomes corrective rather than supervisory.

Late intervention can limit the fallout from individual decisions, but it rarely clarifies who is accountable. Outcomes may be corrected, yet responsibility remains unclear.

Recent guidance shows that when authority is unclear, human oversight becomes informal and inconsistent. The problem is not human involvement, but timing. Without governance designed upfront, people act as a safety valve rather than as accountable decision-makers.

How governance determines who moves ahead

Agentic AI often brings fast, early results, especially when tasks are first automated. Our survey found that many companies see these early benefits. But as autonomous systems grow, organizations often add manual checks and approval steps to manage risk.

Over time, what was once simple slowly becomes more complicated. Decision-making slows down, work-arounds increase, and the benefits of automation fade. This happens not because the technology stops working, but because people never fully trust autonomous systems.

This slowdown doesn’t have to happen. Our survey shows a clear difference: Many organizations see early gains from autonomous AI, but those with stronger governance are much more likely to turn those gains into long-term results, such as greater efficiency and revenue growth. The key difference isn’t ambition or technical skills, but being prepared.

Good governance does not limit autonomy. It makes it workable by clarifying who owns decisions, how systems function is monitored, and when people should intervene. International guidance from the OECD – the Organization for Economic Cooperation and Development – emphasizes this point: Accountability and human oversight need to be designed into AI systems from the start, not added later.

Rather than slowing innovation, governance creates the confidence organizations need to extend autonomy instead of quietly pulling it back.

The next advantage is smarter governance

The next competitive advantage in AI will not come from faster adoption, but from smarter governance. As autonomous systems take on more responsibility, success will belong to organizations that clearly define ownership, oversight and intervention from the start.

In the era of agentic AI, confidence will accrue to the organizations that govern best, not simply those that adopt first.The Conversation

About the Author:

Murugan Anandarajan, Professor of Decision Sciences and Management Information Systems, Drexel University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Despite its steep environmental costs, AI might also help save the planet

By Nir Kshetri, University of North Carolina – Greensboro 

The rapid growth of artificial intelligence has sharply increased electricity and water consumption, raising concerns about the technology’s environmental footprint and carbon emissions. But the story is more complicated than that.

I study emerging technologies and how their development and deployment influence economic, institutional and societal outcomes, including environmental sustainability. From my research, I see that even as AI uses a lot of energy, it can also make systems cleaner and smarter.

AI is already helping to save energy and water, cut emissions and make businesses more efficient in agriculture, data centers, the energy industry, building heating and cooling, and aviation.

Agriculture

Agriculture is responsible for nearly 70% of the world’s freshwater use, and competition for water is growing.

AI is helping farmers use water more efficiently. Argentinian climate tech startup Kilimo, for example, tackles water scarcity with an AI-powered irrigation platform. The software uses large amounts of data, machine learning, and weather and satellite measurements to determine when and how much to water which areas of fields, ensuring that only the plants that actually need water receive it.

Chile’s Ministry of Agriculture has found that in that country’s Biobío region, farms using Kilimo’s precision irrigation systems have reduced water use by up to 30% while avoiding overirrigation. Using less water also reduces the amount of energy needed to pump it from the ground and around a farm.

Kilimo is one example that shows how AI can create economic incentives for sustainability: The amount of water farmers save from precision irrigation is verified, and credits for those savings are sold to local companies that want to offset some of their water use. The farmers then earn a profit – often 20% to 40% above their initial investment.

Data centers

U.S. data centers consumed about 176 terawatt-hours of electricity in 2023, accounting for roughly 4.4% of total U.S. electricity use. This number increased to 183 TWh in 2024. This growing energy footprint has made improving data center efficiency a critical priority for the operators of the data centers themselves, as well as the companies that rely on them – including cloud providers, tech firms and large enterprises running AI workloads – both to reduce costs and meet sustainability and regulatory goals.

AI is helping data centers become more efficient. The number of global internet users grew from 1.9 billion in 2010 to 5.6 billion in 2025. Global internet traffic surged from 20.2 exabytes per month in 2010 to 521.9 exabytes per month in 2025 – a more than 25-fold increase.

Despite the surge in internet traffic and users, data center electricity consumption has grown more moderately, rising from 1% of global electricity use in 2010 to 2% in 2025. Much of this is thanks to efficiency gains, including those enabled by AI.

AI systems analyze operational data in data centers – including workloads, temperature, cooling efficiency and energy use – to spot energy-hungry tasks. It adjusts computing resources to match demand and optimizes cooling. This lets data centers run smoothly without wasting electricity.

At Microsoft, AI is improving energy efficiency by using predictive analytics to schedule computing tasks. This lets servers enter low-power modes during periods of low demand, saving electricity during slower times. Meta uses AI to control cooling and airflow in its data centers. The systems stay safe while using less energy than they might otherwise.

In Frankfurt, Germany, Equinix uses AI to manage cooling and adjust energy use at its data center based on real-time weather. This improved operational efficiency by 9%, The New York Times reported.

Energy and fuels

Energy companies are using AI to boost efficiency and cut emissions. They deploy drones with cameras to inspect pipelines. AI systems analyze the images to more quickly detect corrosion, cracks, dents and leaks, which allows problems to be addressed before they escalate, improving overall safety and reliability.

Shell has AI systems that monitor methane emissions from its facilities by analyzing methane concentrations and wind data, such as speed and direction. This helps the system track how methane disperses, enabling it to pinpoint emission sources and optimize energy use. By identifying the largest leaks quickly, the system allows targeted maintenance and operational adjustments to further reduce emissions. Using that technology, the company says it aims to nearly eliminate methane leaks by 2030.

AI could speed up innovation in clean energy by improving solar panels, batteries and carbon-capture systems. In the longer term, it could enable major breakthroughs, including advanced biofuels or even usable nuclear fusion, while helping track and manage carbon-absorbing resources such as forests, wetlands and carbon storage facilities.

Shell uses AI across its operations to cut emissions. Its process optimizer for liquefied natural gas analyzes sensor data to find more efficient equipment settings, boosting energy efficiency and reducing emissions.

Buildings and district heating

The energy needed to heat, cool and power buildings is responsible for roughly 28% of total global emissions. AI initiatives are starting to reduce building emissions through smart management and predictive optimization.

In downtown Copenhagen, for instance, the local utility company HOFOR deployed thousands of sensors tracking temperatures, humidity and building energy flows. The system uses information about each building to forecast heating needs 24 hours in advance and automatically adjust supply to match demand.

The Copenhagen system was first piloted in schools and multifamily housing, with support from the Nordic Smart City Network and climate-innovation grants. It has since expanded to dozens of sites. Results were clear: Across participating buildings, energy use fell 15% to 25%, peak heating demand dropped by up to 30%, and carbon dioxide emissions decreased by around 10,000 tonnes per year.

AI can also help households and offices save energy. Smart home systems optimize heating, cooling and appliance use. Researchers at the Lawrence Berkeley National Laboratory found that by adopting AI, medium-sized office buildings in the U.S. could reduce energy use by 21% and cut carbon dioxide emissions by 35%.

Aviation

About 2% of all human-caused carbon dioxide emissions in 2023 came from aviation, which emitted about 882 megatons of carbon dioxide.

Contrails, the thin ice clouds formed when aircraft exhaust freezes at cruising altitudes, contribute more than one-third of aviation’s overall warming effect by trapping heat in the atmosphere. AI can optimize flight routes and altitudes in real time to reduce contrail formation by avoiding areas where the air is more humid and therefore more likely to produce contrails.

Airlines have also used AI to improve fuel efficiency. In 2023, Alaska Airlines used 1.2 million gallons less fuel by using AI to analyze weather, wind, turbulence, airspace restrictions and traffic to recommend the most efficient routes, saving around 5% on fuel and emissions for longer flights.

In short, AI affects the environment in both positive and negative ways. Already, it has helped industries cut energy use, lower emissions and use water more efficiently. Expanding these solutions could drive a cleaner, more sustainable planet.The Conversation

About the Author:

Nir Kshetri, Professor of Management, University of North Carolina – Greensboro

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Google’s proposed data center in orbit will face issues with space debris in an already crowded orbit

By Mojtaba Akhavan-Tafti, University of Michigan 

The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single, medium-sized data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.

Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers.

In space, sunshine – which solar panels can convert into electricity – is abundant and reliable. On Nov. 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space. So, instead of beaming power back to Earth, the constellation would beam data back to Earth.

For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.

As a technology entrepreneur, I applaud Google’s ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.

The mathematics of disaster

Space debris – the collection of defunct human-made objects in Earth’s orbit – is already affecting space agencies, companies and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from discontinued satellites.

Space debris travels at hypersonic speeds of approximately 17,500 miles per hour (28,000 km/h) in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.

Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX’s Starlink. The Starlink network has more than 7,500 satellites, which provide global high-speed internet.

The U.S. Space Force actively tracks over 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these telescopes to reliably identify and track.

In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the United States and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.

The orbital shell Google’s project targets – a Sun-synchronous orbit approximately 400 miles (650 kilometers) above Earth – is a prime location for uninterrupted solar energy. At this orbit, the spacecraft’s solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, Sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.

As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this theory, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.

Implications for Project Suncatcher

Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly with a radius of just one kilometer, each node spaced less than 200 meters apart. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 miles per hour – while separated by gaps about the distance you need to safely brake on the highway.

This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to “think” and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.

But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites – this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind.

To add to this complexity, streams of particles and magnetic fields from the Sun – known as space weather – can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.

When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but send it blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and randomly scatter millions of new pieces of debris into an orbit that is already a minefield.

The importance of active avoidance

To prevent crashes and cascades, satellite companies could adopt a leave no trace standard, which means designing satellites that do not fragment, release debris or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with “reflexes” that autonomously detect and dance through a debris field. Suncatcher’s current design doesn’t include these active avoidance capabilities.

In the first six months of 2025 alone, SpaceX’s Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Similarly, Suncatcher would likely encounter debris larger than a grain of sand every five seconds.

Today’s object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.

Equipping Suncatcher with active collision avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity. Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.

Detecting space debris in orbit can help prevent collisions.

Paying rent for the orbit

Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission’s completion. This typically involves a controlled de-orbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite’s altitude, until atmospheric drag takes over and the spacecraft burns up in the atmosphere.

However, the rule does not address the debris already in space, nor any future debris, from accidents or mishaps. To tackle these issues, some policymakers have proposed a use-tax for space debris removal.

A use-tax or orbital-use fee would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris removal missions, which capture and remove the most dangerous pieces of junk.

Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris removal programs can help keep low Earth orbit open for business.The Conversation

About the Author:

Mojtaba Akhavan-Tafti, Associate Research Scientist, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI agents arrived in 2025 – here’s what happened and the challenges ahead in 2026

By Thomas Şerban von Davier, Carnegie Mellon University 

In artificial intelligence, 2025 marked a decisive shift. Systems once confined to research labs and prototypes began to appear as everyday tools. At the center of this transition was the rise of AI agents – AI systems that can use other software tools and act on their own.

While researchers have studied AI for more than 60 years, and the term “agent” has long been part of the field’s vocabulary, 2025 was the year the concept became concrete for developers and consumers alike.

AI agents moved from theory to infrastructure, reshaping how people interact with large language models, the systems that power chatbots like ChatGPT.

In 2025, the definition of AI agent shifted from the academic framing of systems that perceive, reason and act to AI company Anthropic’s description of large language models that are capable of using software tools and taking autonomous action. While large language models have long excelled at text-based responses, the recent change is their expanding capacity to act, using tools, calling APIs, coordinating with other systems and completing tasks independently.

This shift did not happen overnight. A key inflection point came in late 2024, when Anthropic released the Model Context Protocol. The protocol allowed developers to connect large language models to external tools in a standardized way, effectively giving models the ability to act beyond generating text. With that, the stage was set for 2025 to become the year of AI agents.

AI agents are a whole new ballgame compared with generative AI.

The milestones that defined 2025

The momentum accelerated quickly. In January, the release of Chinese model DeepSeek-R1 as an open-weight model disrupted assumptions about who could build high-performing large language models, briefly rattling markets and intensifying global competition. An open-weight model is an AI model whose training, reflected in values called weights, is publicly available. Throughout 2025, major U.S. labs such as OpenAI, Anthropic, Google and xAI released larger, high-performance models, while Chinese tech companies including Alibaba, Tencent, and DeepSeek expanded the open-model ecosystem to the point where the Chinese models have been downloaded more than American models.

Another turning point came in April, when Google introduced its Agent2Agent protocol. While Anthropic’s Model Context Protocol focused on how agents use tools, Agent2Agent addressed how agents communicate with each other. Crucially, the two protocols were designed to work together. Later in the year, both Anthropic and Google donated their protocols to the open-source software nonprofit Linux Foundation, cementing them as open standards rather than proprietary experiments.

These developments quickly found their way into consumer products. By mid-2025, “agentic browsers” began to appear. Tools such as Perplexity’s Comet, Browser Company’s Dia, OpenAI’s GPT Atlas, Copilot in Microsoft’s Edge, ASI X Inc.’s Fellou, MainFunc.ai’s Genspark, Opera’s Opera Neon and others reframed the browser as an active participant rather than a passive interface. For example, rather than helping you search for vacation details, it plays a part in booking the vacation.

At the same time, workflow builders like n8n and Google’s Antigravity lowered the technical barrier for creating custom agent systems beyond what has already happened with coding agents like Cursor and GitHub Copilot.

New power, new risks

As agents became more capable, their risks became harder to ignore. In November, Anthropic disclosed how its Claude Code agent had been misused to automate parts of a cyberattack. The incident illustrated a broader concern: By automating repetitive, technical work, AI agents can also lower the barrier for malicious activity.

This tension defined much of 2025. AI agents expanded what individuals and organizations could do, but they also amplified existing vulnerabilities. Systems that were once isolated text generators became interconnected, tool-using actors operating with little human oversight.

The business community is gearing up for multiagent systems.

What to watch for in 2026

Looking ahead, several open questions are likely to shape the next phase of AI agents.

One is benchmarks. Traditional benchmarks, which are like a structured exam with a series of questions and standardized scoring, work well for single models, but agents are composite systems made up of models, tools, memory and decision logic. Researchers increasingly want to evaluate not just outcomes, but processes. This would be like asking students to show their work, not just provide an answer.

Progress here will be critical for improving reliability and trust, and ensuring that an AI agent will perform the task at hand. One method is establishing clear definitions around AI agents and AI workflows. Organizations will need to map out exactly where AI will integrate into workflows or introduce new ones.

Another development to watch is governance. In late 2025, the Linux Foundation announced the creation of the Agentic AI Foundation, signaling an effort to establish shared standards and best practices. If successful, it could play a role like the World Wide Web Consortium in shaping an open, interoperable agent ecosystem.

There is also a growing debate over model size. While large, general-purpose models dominate headlines, smaller and more specialized models are often better suited to specific tasks. As agents become configurable consumer and business tools, whether through browsers or workflow management software, the power to choose the right model increasingly shifts to users rather than labs or corporations.

The challenges ahead

Despite the optimism, significant socio-technical challenges remain. Expanding data center infrastructure strains energy grids and affects local communities. In workplaces, agents raise concerns about automation, job displacement and surveillance.

From a security perspective, connecting models to tools and stacking agents together multiplies risks that are already unresolved in standalone large language models. Specifically, AI practitioners are addressing the dangers of indirect prompt injections, where prompts are hidden in open web spaces that are readable by AI agents and result in harmful or unintended actions.

Regulation is another unresolved issue. Compared with Europe and China, the United States has relatively limited oversight of algorithmic systems. As AI agents become embedded across digital life, questions about access, accountability and limits remain largely unanswered.

Meeting these challenges will require more than technical breakthroughs. It demands rigorous engineering practices, careful design and clear documentation of how systems work and fail. Only by treating AI agents as socio-technical systems rather than mere software components, I believe, can we build an AI ecosystem that is both innovative and safe.The Conversation

About the Author:

Thomas Şerban von Davier, Affiliated Faculty Member, Carnegie Mellon Institute for Strategy and Technology, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

A backlash against AI imagery in ads may have begun as brands promote ‘human-made’

By Paul Harrison, Deakin University 

In a wave of new ads, brands like Heineken, Polaroid and Cadbury have started hating on artificial intelligence (AI), celebrating their work as “human-made”.

But in these advertising campaigns on TV, billboards on New York streets and on social media, the companies are signalling something larger.

Even Apple’s new series release, Pluribus, includes the phrase “Made by Humans” in the closing credits.

Other brands including H&M and Guess have faced a backlash for using AI brand ambassadors instead of humans.

These gestures suggest we have reached a cultural moment in the evolution of this technology, where people are unsure what creativity means when machines can now produce much of what we see, hear and perhaps even be moved by.

This feels like efficiency – for executives

At a surface level, AI offers efficiencies such as faster production, cheaper visuals, instant personalisation, and automated decisions. Government and business have rushed toward it, drawn by promises of productivity and innovation. And there is no doubt that this promise is deeply seductive. Indeed, efficiency is what AI excels at.

In the context of marketing and advertising, this “promise”, at least at face value, seems to translate to smaller marketing budgets, better targeting, automated decisions (including by chatbots) and rapid deployment of ad campaigns.

For executives, this is exciting and feels like real progress, with cheaper, faster and more measurable brand campaigns.

But advertising has never really just been about efficiency. It has always relied on a degree of emotional truth and creative mystery. That psychological anchor – a belief that human intention sits behind what we are looking at – turns out to matter more than we like to admit.

Turns out, people care about authenticity

Indeed, people often value objects more when they believe those objects carry traces of a person’s intention or history. This is the case even when those images don’t differ in any material way from a computer-generated image.

To some degree, this signals consumers are sensitive to the presence of a human creator, because when visually compelling computer-generated images are labelled as machine-made, people tend to rate them less favourably.

Indeed, when the same paintings are randomly labelled as either “human created” or “AI created”, people consistently judge the works they believe to be “human created” as more beautiful, meaningful and profound.

It seems the simple presence of an AI label reduces the perceived creativity and value.

A betrayal of creativity

However, there is an important caveat here. These studies rely on people being told who made the work. The effect is a result of attribution, not perception. And so this limitation points towards a deeper problem.

If evaluations change purely because people believe a work was machine made, the response is not about quality, it is about meaning. It reflects a belief that creativity is tied to intention, effort and expression. These are qualities an algorithm doesn’t possess, even when it creates something visually persuasive. In other words, the label carries emotional weight.

There are, of course, obvious examples of when AI goes comedically wrong. In early 2024, the Queensland Symphony Orchestra promoted its brand using a very strange AI-generated image most people instantly recognised as unnatural. Part of the backlash, along with the unsettling weirdness of the image, was the perception an arts organisation was betraying human creativity.

But as AI systems improve, people often struggle to distinguish synthetic from real. Indeed, AI generated faces are judged by many to be just as real, and sometimes more trustworthy, than actual photographs.

Research shows people overestimate their ability to detect deepfakes, and often mistake deepfake videos as authentic.

Although we can see emerging patterns here, the empirical research in this area is being outpaced by AI’s evolving capabilities. So we are often trying to understand psychological responses to a technology that has already evolved since the research took place.

As AI becomes more sophisticated, the boundary between human and machine-made creativity will become harder to perceive. Commerce may not be particularly troubled by this. If the output performs well, the question of origin become secondary.

Why we value creativity

But creative work has never been only about generating content. It is a way for people to express emotion, experience, memory, dissent and interpretation.

And perhaps this is why the rise of “Made by Humans” actually matters. Marketers are not simply selling provenance, they are responding to a deeper cultural anxiety about authorship in a moment when the boundaries of creativity are becoming harder to perceive.

Indeed, one could argue there is an ironic tension here. Marketing is one of the professions most exposed to being superseded by the same technology marketers are now trying to differentiate themselves from.

So whether these human-made claims are a commercial tactic or a sincere defence of creative intention, there is significantly more at stake than just another way to drive sales.The Conversation

About the Author:

Paul Harrison, Director, Master of Business Administration Program (MBA); Co-Director, Better Consumption Lab, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Yes, there is an AI investment bubble – here are three scenarios for how it could end

By Sergi Basco

Booms and busts are a recurring feature of modern economics, but when an asset’s value becomes overinflated, a boom quickly becomes a bubble.

The two most recent major bubble episodes were the dot-com bubble in the United States (1996-2000) and the housing bubbles that emerged around 2006 in different countries. Both ended in recession – the former relatively mild, and the latter catastrophically bad. Recent, dizzying increases in the stock prices of AI-related companies have now got many investors asking “are we witnessing another asset price bubble?”

It is important to put the current AI boom in context. The stock price of Nvidia – which manufactures many of the computer chips that power the AI industry – has multiplied by 13 since the start of 2023. Stocks in other AI-related companies like Microsoft and Google’s parent company Alphabet have multiplied by 2.1 and 3.2, respectively. In comparison, the S&P 500, which tracks the stocks of the most important US firms, has multiplied by just 1.8 in the same period.

It is important to emphasise that these AI-related companies are included in the S&P 500, making the difference with non-AI companies even larger. Accordingly, it seems that there is an AI-bubble – but it won’t necessarily end in a repeat of 2008.

How a bubble forms

The price of any stock can be broken down into two components: its fundamental value, and the inflated bubble value. If the stock’s price is above its fundamental value, there is a bubble in its price.

The fundamental value of an asset is the discounted sum of its expected future dividends. The key word here is “expected”. Given that no one, not even ChatGPT, can predict the future, the fundamental value depends on the subjective expectations of each investor. They might be optimistic or pessimistic; in time, some will be proven right, and others wrong.

Optimistic investors expect that AI will change the world, and that the owners of this technology will make (almost) infinite profits. Not knowing which company will emerge victorious, they invest in all AI-related companies.

In contrast, pessimistic investors think that AI is just sophisticated software, as opposed to truly groundbreaking technology, and they will see bubbles everywhere.

A third possibility is the more sophisticated investors. These are people that think – or know – that there is a bubble, but keep investing in the hope of being able to ride the wave and get off before it is too late.

The last of these possibilities is reminiscent of the infamous quote from Citigroup CEO Chuck Prince before the 2008 housing bubble burst: “as long as the music is playing, you’ve got to get up and dance”.

As an economist, I can say safely that it is impossible for all AI-related companies to end up dominating the market. This means, beyond a doubt, that the value of at least some AI-related stocks have a large bubble component.

A shortage of assets

Asset price bubbles can be the market’s natural response to a shortage of assets. In a moment when the demand for assets exceeds the supply (especially for safe assets like government bonds), there is room for other, newer assets to emerge.

This pattern explains the emergence of, for example, the 1990s dot-com bubble and the subsequent 2000s housing bubble. In that context, the growing role of China in financial markets increased the demand for assets in the West – the money first went to dot-com companies in the 1990s and, when that bubble burst, to fund housing via mortgage-backed securities.

In today’s context, a combination of factors have paved the way for the AI bubble: excitement around new technology, low interest rates (another sign of shortage of assets) and huge amounts of of cash flowing into large corporations.

The bubble bursts: good, bad and ugly scenarios

At the very least, part of the soaring value of AI-related stocks is a bubble – and a bubble cannot stay inflated forever. It has to either burst on its own, or, ideally, be carefully deflated through targeted government or Central Bank measures. The current AI bubble could end in one of three scenarios: good, bad, or ugly.

Good: boom not bubble

During the dot-com bubble, many bad firms received too much money – the classic example was Pets.com. But the bubble also provided financing to companies like Google, which (arguably) contributed to making the internet a productivity-enhancing technology.

Something similar may happen with AI, as the current flurry of investment could, in the long run, create something good: technology that benefits humanity, and eventually yields return on investment. Without bubble-levels of cashflow, it would not be funded.

In this optimistic scenario I am assuming that AI, even though it may displace some jobs in the short term (as most technology does), will turn out to be good for workers. I am also assuming that it, obviously, won’t lead to the extinction of humanity. For this to be the case, governments need to introduce proper, robust regulations. It is also important to emphasise that there is no need for countries to invent or invest in new technologies – they must adapt them and provide applications to make them useful.

Bad: a gentle burst

All bubbles eventually burst. As things stand, we do not know when this will happen, nor the extent of the potential damage, but there will probably be a market correction when enough investors realise that multiple companies are overvalued. This decline in the stock market is bound to cause a recession.

Hopefully, it will be short-lived like the 2001 recession that followed the burst of the dot-com bubble. While no recession is painless, this one was relatively mild, and lasted less than one year in the US.

However, the burst of the AI bubble may be more painful because more households participate (either directly or indirectly via mutual funds) in the stock market than 20 years ago.

Even though the job of Central Banks is not to control asset prices, they may need to consider raising interest rates to deflate the bubble before it gets too large. The more sudden the crash, the deeper and costlier any ensuing recession will be.

Ugly: crash and burn

The burst of the AI-bubble would be ugly if it shares more features than we imagine with the 2000s housing bubble. On the positive side, AI stocks are not houses. This is good because when housing bubbles burst, the impacts on the economy are larger and longer-lasting than with other assets.

The housing bubble alone did not cause the 2008 financial crisis – it also caused the global financial system to collapse. Another reason to be optimistic is that the role of commercial banks in AI finance is much smaller than in housing – a vast amount of every bank’s money is perpetually tied up in mortgages.

However, one important caveat is that we do not how the financial system will react if these huge AI companies default on their debt. Alarmingly, this seems to be how they are currently financing new investments – a recent Bank of America analysis warned that large tech companies are relying heavily on debt to build new data centres, many of which are to cover demand that doesn’t actually exist yet.


A weekly e-mail in English featuring expertise from scholars and researchers. It provides an introduction to the diversity of research coming out of the continent and considers some of the key issues facing European countries. Get the newsletter!The Conversation


Sergi Basco, Profesor Agregado de Economia, Universitat de Barcelona

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Energy Co. to Combine With Semiconductor Co. to Create AI Infrastructure

Source: Streetwise Reports (10/10/25)

Energy innovation company Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA) says it has signed a non-binding Letter of Intent (LOI) for a proposed all-stock business combination with Smartkem Inc. (SMTK:Nasdaq). Find out the terms of the proposed merger.

Energy innovation company Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA) announced it has signed a non-binding Letter of Intent (LOI) dated October 6, 2025, with Smartkem Inc. (SMTK:Nasdaq), a company pioneering a new class of organic semiconductor technology, for a proposed all-stock business combination, according to a release.

If finalized, the Proposed Transaction would create a Nasdaq-listed, U.S.-owned and controlled artificial intelligence (AI) infrastructure company, merging low-cost domestic energy with advanced semiconductor packaging and materials to meet the rising demand for AI compute capacity.

JEV said it is strategically positioned at the crossroads of energy and AI, utilizing its robust energy framework and renewable innovations to provide reliable, cost-effective power for AI data centers.

The proposed transaction aims to integrate Smartkem’s patented organic semiconductor platform into Jericho’s infrastructure to accelerate: energy-efficient AI data centers designed for next-generation workloads, advanced AI chip packaging that minimizes power consumption and heat, low-power optical data transmission for faster interconnects, and conformable sensors for environmental monitoring and operational resilience, Jericho noted in the release.

“AI compute growth is driving unprecedented demand for U.S. power and infrastructure,” Jericho Chief Executive Officer Brian Williamson said. “By combining JEV’s scalable energy platform with Smartkem’s semiconductor breakthroughs, we can deliver a new generation of faster, efficient, and more resilient AI data centers.”

Ian Jenks, chairman and CEO of Smartkem, added, “This proposed transaction positions Smartkem’s technology at the center of the largest technology build-out of our era. We believe this combination provides the pathway for our patented materials to reach their full commercial potential inside next-generation AI infrastructure.”

“Together, JEV and Smartkem are developing a unified U.S. platform for AI data centers that pairs energy resilience with advanced semiconductors, a vertically integrated strategy aimed at driving sustainable growth and creating value for shareholders,” said Anthony Amato, strategic advisor to Smartkem.

According to Jericho, some highlights of the proposed transaction include establishing a fully integrated platform covering energy supply and AI data center infrastructure and positioning the combined company to capitalize on the forecasted growth in U.S. power demand for AI data centers.

The combination of JEV’s scalable energy and infrastructure expertise with Smartkem’s patented organic semiconductor materials and OTFT technologies will drive innovation and enhance data center efficiency, JEV said.

The transaction “ensures strategic technology assets are developed, deployed, and scaled under U.S. ownership for global AI infrastructure partners,” the release said.

It also combines two experience management teams “focused on commercializing disruptive innovations at scale.”

Terms of the Proposed Transaction

Under the LOI, the proposed transaction is structured as an all-stock business combination, executed through either a share exchange or statutory merger, Jericho said. In this arrangement, Smartkem would be the surviving legal entity and continue as a publicly listed company on The Nasdaq Stock Market, becoming the “combined company.”

Upon closing, Jericho stockholders would own 65%, while Smartkem stockholders, prior to the transaction, would own 35% of the fully diluted equity securities of the combined company, subject to certain adjustments.

Brian Williamson, currently the CEO of Jericho, would assume the role of CEO for the combined company, according to the release. The board of directors would be reconstituted to include a majority of members designated by Jericho, in compliance with Nasdaq and SEC requirements.

Both companies will require significant additional capital to negotiate the proposed transaction, obtain necessary stockholder approvals, and complete the transaction. Closing is contingent on several conditions, including negotiating a definitive agreement, satisfactory due diligence, board and stockholder approvals, and Nasdaq’s approval for continued listing.

Smartkem and Jericho have agreed to a 60-day exclusivity period to negotiate the terms of a definitive agreement. This period can be terminated by either party under certain conditions, including if Smartkem does not purchase Jericho common shares valued at least US$500,000 by November 30, 2025. While the LOI is active, Smartkem will purchase Jericho common shares from treasury, subject to certain conditions.

The transaction terms outlined in the LOI are expected to be replaced by a definitive agreement. The final legal structure may be adjusted based on tax, corporate, securities, and accounting considerations.

About Smartkem

Smartkem is revolutionizing electronics with a new class of transistors developed using its proprietary semiconductor materials, Jericho said in the release. Its TRUFLEX® semiconductor polymers enable low-temperature printing processes compatible with existing manufacturing infrastructure, delivering low-cost, high-performance displays. The platform is applicable in various display technologies, including MicroLED, LCD, and AMOLED, as well as advanced computer and AI chip packaging, sensors, and logic.

Smartkem designs and develops its materials at its R&D facility in Manchester, U.K., and offers prototyping services at the Centre for Process Innovation (CPI) in Sedgefield, U.K. It also operates a field application office in Hsinchu, Taiwan, near its collaboration partner, The Industrial Technology Research Institute (ITRI).

Smartkem is developing a commercial-scale production process and Electronic Design Automation (EDA) tools to demonstrate the commercial viability of manufacturing a new generation of displays using its materials.

The company holds an extensive IP portfolio, including 140 granted patents across 17 patent families, 14 pending patents, and 40 codified trade secrets. For more information, visit the Smartkem website or follow them on LinkedIn.

JEV’s Data Center Initiative

Earlier this year, Jericho launched its data center initiative, strategically leveraging its expansive 41,000-acre portfolio of active oil and gas joint venture properties in Oklahoma. By harnessing abundant, low-cost on-site natural gas, JEV is transforming its energy assets into secure, scalable, high-performance AI computing hubs tailored for the AI era.

JEV’s build-to-suit (BTS) data centers capitalize on the company’s extensive network of over 60 miles of gas, power, and water infrastructure, along with prime positioning on a U.S. fiber “superhighway,” to offer unparalleled connectivity and performance.

In July, Jericho announced a memorandum of understanding (MOU) with M2 Development Solutions LLC to accelerate the development of AI data centers across the United States. Finalized on July 6, the agreement expands Jericho’s reach beyond its Oklahoma asset base into Ohio and Nevada, utilizing M2’s large-scale development sites.

The Ohio location spans 400 acres and includes access to utility power and on-site natural gas power generation assets. In Nevada, the 3,700-acre site offers a diverse energy mix, including utility power access, on-site geothermal and solar capabilities, and natural gas-fed power generation. These features provide energy diversification options at a scale suitable for AI data center operations, which demand substantial and reliable power sources.

“Our partnership with M2 is a transformative step in executing our AI data center strategy,” said Williamson at the time. “Integrating M2’s gigawatt-scale sites accelerates our ability to deliver scalable, energy-efficient infrastructure for modern AI workloads.”

The Catalyst: We’re Consuming More Electricity Than Ever

In a significant shift from nearly two decades of stagnant U.S. load growth, Americans are now consuming more electricity than ever, according to a report by ICF International. The rapid expansion of data centers to support AI technology, along with a surge in new manufacturing and oil and gas production, is driving a notable increase in industrial electricity demand.

Additionally, electric vehicles, heat pumps, and other energy-intensive products are further contributing to this growth. ICF’s analysis suggests that U.S. electricity demand is expected to rise by 25% by 2030 and by 78% by 2050, compared to 2023 levels. This surge in demand has significant implications for the reliability and affordability of electricity. For residential customers, electricity rates could increase by 15% to 40% by 2030, depending on the market. By 2050, some rates might even double.

Streetwise Ownership Overview*

Jericho Energy Ventures Inc. (JEV:TSX.V; JROOF:OTC; JLM:FRA)


In a piece for U.S. Global Investors dated July 25, Frank Holmes compared the current AI advancements to the scale and ambition of the defense expansion during the Reagan era or the shale boom of the 2010s.

According to Grand View Research, the global data center market size was estimated at US$347.6 billion in 2024 and is projected to reach US$652.01 billion by 2030, growing at a compound annual growth rate (CAGR) of 11.2% from 2025 to 2030. “The rapid adoption of digital transformation initiatives, cloud computing, and emerging technologies such as artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) have substantially increased demand,” Holmes noted.

Ownership and Share Structure

Around 41% of Jericho’s shares are held by management and insiders, the company said. They include CEO Brian Williamson, who owns 1.38%; founder Allen Wilson, who owns 0.99%; and board member Nicholas Baxter, who owns 0.49%; according to Refinitiv’s latest research.

Around 34% of shares are held by the company’s “Top 10 external shareholders.” The rest is in retail.

JEV’s market cap is CA$35.07 million, and it trades in a 52-week range of CA$0.08 and CA$0.21. It has 304.03 million shares outstanding, about 220.98 million floating.

 

Important Disclosures:

  1. As of the date of this article, officers and/or employees of Streetwise Reports LLC (including members of their household) own securities of Jericho Energy Ventures Inc.
  2. Steve Sobek wrote this article for Streetwise Reports LLC and provides services to Streetwise Reports as an employee.
  3. This article does not constitute investment advice and is not a solicitation for any investment. Streetwise Reports does not render general or specific investment advice and the information on Streetwise Reports should not be considered a recommendation to buy or sell any security. Each reader is encouraged to consult with his or her personal financial adviser and perform their own comprehensive investment research. By opening this page, each reader accepts and agrees to Streetwise Reports’ terms of use and full legal disclaimer. Streetwise Reports does not endorse or recommend the business, products, services or securities of any company.

For additional disclosures, please click here.

Today’s AI hype has echoes of a devastating technology boom and bust 100 years ago

By Cameron Shackell, Queensland University of Technology 

The electrification boom of the 1920s set the United States up for a century of industrial dominance and powered a global economic revolution.

But before electricity faded from a red-hot tech sector into invisible infrastructure, the world went through profound social change, a speculative bubble, a stock market crash, mass unemployment and a decade of global turmoil.

Understanding this history matters now. Artificial intelligence (AI) is a similar general purpose technology and looks set to reshape every aspect of the economy. But it’s already showing some of the hallmarks of electricity’s rise, peak and bust in the decade known as the Roaring Twenties.

The reckoning that followed could be about to repeat.

A crowd gathers outside the New York Stock Exchange following the ‘Great Crash’ of October 1929.
New York World-Telegram and the Sun Newspaper Photograph Collection, US Library of Congress

First came the electricity boom

A century ago, when people at the New York Stock Exchange talked about the latest “high tech” investments, they were talking about electricity.

Investors poured money into suppliers such as Electric Bond & Share and Commonwealth Edison, as well as companies using electricity in new ways, such as General Electric (for appliances), AT&T (telecommunications) and RCA (radio).

It wasn’t a hard sell. Electricity brought modern movies, new magazines from faster printing presses, and evenings by the radio.

It was also an obvious economic game changer, promising automation, higher productivity, and a future full of leisure and consumption. In 1920, even Soviet revolutionary leader Vladimir Lenin declared: “Communism is Soviet power plus the electrification of the whole country.”

Today, a similar global urgency grips both communist and capitalist countries about AI, not least because of military applications.

Then came the peak

Like AI stocks now, electricity stocks “became favorites in the boom even though their fundamentals were difficult to assess”.

Market power was concentrated. Big players used complex holding structures to dodge rules and sell shares in basically the same companies to the public under different names.

US finance professor Harold Bierman, who argued that attempts to regulate overpriced utility stocks were a direct trigger for the crash, estimated that utilities made up 18% of the New York Stock Exchange in September 1929. Within electricity supply, 80% of the market was owned by just a handful of holding firms.

But that’s just the utilities. As today with AI, there was a much larger ecosystem.

Almost every 1920s “megacap” (the largest companies at the time) owed something to electrification. General Motors, for example, had overtaken Ford using new electric production techniques.

Essentially, electricity became the backdrop to the market in the same way AI is doing, as businesses work to become “AI-enabled”.

No wonder that today tech giants command over a third of the S&P 500 index and nearly three-quarters of the NASDAQ. Transformative technology drives not only economic growth, but also extreme market concentration.

In 1929, to reflect the new sector’s importance, Dow Jones launched the last of its three great stock averages: the electricity-heavy Dow Jones Utilities Average.

But then came the bust

The Dow Jones Utilities Average went as high as 144 in 1929. But by 1934, it had collapsed to just 17.

No single cause explains the New York Stock Exchange’s unprecedented “Great Crash”, which began on October 24 1929 and preceded the worldwide Great Depression.

That crash triggered a banking crisis, credit collapse, business failures, and a drastic fall in production. Unemployment soared from just 3% to 25% of US workers by 1933 and stayed in double figures until the US entered the second world war in 1941.

Lithograph of Wall Street, New York City, with panicked crowd, lightning, people jumping out of buildings, buildings falling, at time of stock market crash in 1929.
Lithograph of Wall Street, New York City, after the 1929 stock market crash. Jame Rosenberg, Ben and Beatrice Goldstein Foundation collection, US Library of Congress

The ripple effects were global, with most countries seeing a rise in unemployment, especially in countries reliant on international trade, such as Chile, Australia and Canada, as well as Germany.

The promised age of shorter hours and electric leisure turned into soup kitchens and bread lines.

The collapse exposed fraud and excess. Electricity entrepreneur Samuel Insull, once Thomas Edison’s protégé and builder of Chicago’s Commonwealth Edison, was at one point worth US$150 million – an even more staggering amount at the time.

But after Insull’s empire went bankrupt in 1932, he was indicted for embezzlement and larceny. He fled overseas, was brought back, and eventually acquitted – but 600,000 shareholders and 500,000 bondholders lost everything.

However, to some Insull seemed less a criminal mastermind than a scapegoat for a system whose flaws ran far deeper.

Reforms unthinkable during the boom years followed.

The Public Utility Holding Company Act of 1935 broke up the huge holding company structures and imposed regional separation. Once exciting electricity darlings became boring regulated infrastructure: a fact reflected in the humble “Electric Company” square on the original 1935 Monopoly board.

Lessons from the 1920s for today

AI is rolling out faster than even those seeking to use it for business or government policy can sometimes manage properly.

Like electricity a century ago, a few interconnected firms are building today’s AI infrastructure.

And like a century ago, investors are piling in – though many don’t know the extent of their exposure through their superannuation funds or exchange traded funds (ETFs).

Just as in the late 1920s, today’s regulation of AI is still loose in many parts of the world – though the European Union is taking a tougher approach with its world-first AI law.

US President Donald Trump has taken the opposite approach, actively cutting “onerous regulation” of AI. Some US states have responded by taking action themselves. The courts, when consulted, are hamstrung by laws and definitions written for a different era.

Can we transition to AI being invisible infrastructure like electricity without a another bust, only then followed by reform?

If the parallels to the electrification boom remain unnoticed, the chances are slim.The Conversation

About the Author:

Cameron Shackell, Sessional Academic, School of Information Systems, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Book Review: Hands-On AI Trading with Python, QuantConnect and AWS

AI is all the rage these days. We know this! But as investors and traders, do we know how to incorporate AI into our systems? Do we even know the many possible ways we could use AI to help our trading? Well, today I am going to do bring something a little bit different to the blog, a quick book review!

As a Python coder, automated trader and investor, I feel constantly bombarded with bits and pieces of AI trading information from newsletters or ‘how to’ tutorials to implement this or that. Luckily, I was recently given a complimentary copy of Hands-On AI Trading with Python, QuantConnect and AWS and, it turns out, this book is a comprehensive guide that brings a whole lot of information into one place with a consistent presentation and coding style.

Front cover of Hands-On AI Trading

Basic Information:

This book was written by five active data-driven market professionals that all run businesses or have positions that are aligned to the financial markets and/or using AI and automated solutions. Jiri Pik is the CEO of RocketEdge.com, Jared Broad is the founder and CEO of QuantConnect, Ernest Chan is the founder of PredictNow.AI, Philip Sun is the CEO of Adaptive Investment Solutions and Vivek Singh previously worked at a hedge fund and is now a senior product manager at AWS.

This book is targeted towards those in finance, aspiring quants, veteran quants, hedge fund traders, as well as independent traders & investors. As you can tell from the book’s title, there’s a focus on using the Python programming language as well as the services of QuantConnect, Amazon Web Services (AWS), and Predictnow.ai.

The authors present these specific tools (QuantConnect, AWS, Predictnow.ai) as a tech-stack to get things from start to finish. As stated in the book, the goal was to provide, “an easy-to-setup and use environment where readers could instantly experiment with the algorithms to build their confidence without spending any time setting up the required infrastructure.” In other words, the reader has an opportunity to go from the learning, creating and testing phase (with code and AI models) to potentially working through to a live strategy trading (through QuantConnect and their connected brokers).

I found the book to be well organized and it is structured into 3 main parts.

Part 1 is about the Capital Markets and Quantitative Trading.

Part one quickly brings those unfamiliar with the financial markets up to speed. It covers various topics from the different types of markets traded to the mechanics of how things work in the market ecosystem. This includes all the different types of participants, the different roles they play, the different types of orders these traders use as well as who has unique types of informed access. The authors go further through derivatives, futures, charting, crypto and more.

The quantitative analysis and trading part of this section brings a comprehensive overview of quantitative trader functions using QuantConnect and Python code. It details the steps, processes, and aspects that quants will go through, experience and need to consider for a successful process. I think this section will be very beneficial for aspiring and seasoned quant traders alike, as this book does a great job of laying out the market framework and the quantitative trading landscape.

ai python trading image

Image from example in Hands-On AI Trading.

Part 2 goes into AI and Machine Learning (ML) in Algorithmic Trading.

Part two focuses on AI-based algorithmic trading. Here, you start to address the market prediction, forecasting or other specific problems you’re trying to solve. You proceed step by step, breaking down issues and finding solutions using AI and machine learning processes. It details the data set preparation, handling data, creating features, and splitting datasets into training and testing phases.

If you are unfamiliar with AI models – this section (especially Chapter 4) is for you as it delves into models like linear regression, Markov, Bayes, decision trees, support vector machines, neural networks, and many more. Found alongside these characteristics and concepts is the Python code you can use for these different types of quant functions.

Part 3 delves into Advanced Applications of AI in Trading and Risk Management.

Finally, part three discusses using these AI models in real trading and investing scenarios. The authors provide 19 specific examples and this is where I think the main strength of this book lies. These examples illustrate different aspects of the investment game or problems that are solved using various AI models for major financial markets (FX, stocks, etc.). These examples, once understood, ideally can form the basis for many new ideas, as well as just understanding how these pros go about it. Also, the Python code is included for these examples.

For instance, one of my favorite examples (#8) was just a simple exercise in using a stop-loss based on historical volatility (and drawdown recovery). This example used a LASSO regression model with features including the VIX, Average True Range (of n months) and Standard Deviation (of n months). The example used a few different methods to test variations of a dynamic stop-loss order to varying degrees of success. This type of example represents a common problem most traders come into when working through their strategies.

The examples also give interesting ideas on how to use AI and models in use cases beyond just trying to predict future price returns.

Overall Takeaway: 

I thought this book was well done and is the best book that bridges quant trading and AI together that I have read so far. I think a lot of the AI and machine learning aspects were explained and guided in a clear, concise, and a well-organized way, since it’s very easy to get lost in the weeds with this subject.

The breadth of coverage among these many strategies, concepts, and factors involved is admirable, covering all the way from data acquisition and programming to the role of generative AI. There’s a lot to unpack. There’s a lot to learn. I think it’s a testament to the authors that they created a book that covers so much. There’s also a github repository for the examples.

I would recommend this book for any aspiring quant traders or programmers, or anyone who is interested in the understanding of these markets, especially in how quant trading and AI intersect. I would also recommend it for traders looking for examples of AI in trading or finding new ideas to implement AI strategies.

Disclaimer: Complimentary book copy was provided by Wiley.


Article written by Zac@InvestMacro

 

AI is transforming weather forecasting − and that could be a game changer for farmers around the world

By Paul Winters, University of Notre Dame and Amir Jina, University of Chicago 

For farmers, every planting decision carries risks, and many of those risks are increasing with climate change. One of the most consequential is weather, which can damage crop yields and livelihoods. A delayed monsoon, for example, can force a rice farmer in South Asia to replant or switch crops altogether, losing both time and income.

Access to reliable, timely weather forecasts can help farmers prepare for the weeks ahead, find the best time to plant or determine how much fertilizer will be needed, resulting in better crop yields and lower costs.

Yet, in many low- and middle-income countries, accurate weather forecasts remain out of reach, limited by the high technology costs and infrastructure demands of traditional forecasting models.

A new wave of AI-powered weather forecasting models has the potential to change that.

By using artificial intelligence, these models can deliver accurate, localized predictions at a fraction of the computational cost of conventional physics-based models. This makes it possible for national meteorological agencies in developing countries to provide farmers with the timely, localized information about changing rainfall patterns that the farmers need.

The challenge is getting this technology where it’s needed.

Why AI forecasting matters now

The physics-based weather prediction models used by major meteorological centers around the world are powerful but costly. They simulate atmospheric physics to forecast weather conditions ahead, but they require expensive computing infrastructure. The cost puts them out of reach for most developing countries.

Moreover, these models have mainly been developed by and optimized for northern countries. They tend to focus on temperate, high-income regions and pay less attention to the tropics, where many low- and middle-income countries are located.

A major shift in weather models began in 2022 as industry and university researchers developed deep learning models that could generate accurate short- and medium-range forecasts for locations around the globe up to two weeks ahead.

These models worked at speeds several orders of magnitude faster than physics-based models, and they could run on laptops instead of supercomputers. Newer models, such as Pangu-Weather and GraphCast, have matched or even outperformed leading physics-based systems for some predictions, such as temperature.

A woman in a red sari tosses pellets into a rice field.
A farmer distributes fertilizer in India.
EqualStock IN from Pexels

AI-driven models require dramatically less computing power than the traditional systems.

While physics-based systems may need thousands of CPU hours to run a single forecast cycle, modern AI models can do so using a single GPU in minutes once the model has been trained. This is because the intensive part of the AI model training, which learns relationships in the climate from data, can use those learned relationships to produce a forecast without further extensive computation – that’s a major shortcut. In contrast, the physics-based models need to calculate the physics for each variable in each place and time for every forecast produced.

While training these models from physics-based model data does require significant upfront investment, once the AI is trained, the model can generate large ensemble forecasts — sets of multiple forecast runs — at a fraction of the computational cost of physics-based models.

Even the expensive step of training an AI weather model shows considerable computational savings. One study found the early model FourCastNet could be trained in about an hour on a supercomputer. That made its time to presenting a forecast thousands of times faster than state-of-the-art, physics-based models.

The result of all these advances: high-resolution forecasts globally within seconds on a single laptop or desktop computer.

Research is also rapidly advancing to expand the use of AI for forecasts weeks to months ahead, which helps farmers in making planting choices. AI models are already being tested for improving extreme weather prediction, such as for extratropical cyclones and abnormal rainfall.

Tailoring forecasts for real-world decisions

While AI weather models offer impressive technical capabilities, they are not plug-and-play solutions. Their impact depends on how well they are calibrated to local weather, benchmarked against real-world agricultural conditions, and aligned with the actual decisions farmers need to make, such as what and when to plant, or when drought is likely.

To unlock its full potential, AI forecasting must be connected to the people whose decisions it’s meant to guide.

That’s why groups such as AIM for Scale, a collaboration we work with as researchers in public policy and sustainability, are helping governments to develop AI tools that meet real-world needs, including training users and tailoring forecasts to farmers’ needs. International development institutions and the World Meteorological Organization are also working to expand access to AI forecasting models in low- and middle-income countries.

AI forecasts can be tailored to context-specific agricultural needs, such as identifying optimal planting windows, predicting dry spells or planning pest management. Disseminating those forecasts through text messages, radio, extension agents or mobile apps can then help reach farmers who can benefit. This is especially true when the messages themselves are constantly tested and improved to ensure they meet the farmers’ needs.

A recent study in India found that when farmers there received more accurate monsoon forecasts, they made more informed decisions about what and how much to plant – or whether to plant at all – resulting in better investment outcomes and reduced risk.

A new era in climate adaptation

AI weather forecasting has reached a pivotal moment. Tools that were experimental just five years ago are now being integrated into government weather forecasting systems. But technology alone won’t change lives.

With support, low- and middle-income countries can build the capacity to generate, evaluate and act on their own forecasts, providing valuable information to farmers that has long been missing in weather services.The Conversation

About the Author:

Paul Winters, Professor of Sustainable Development, University of Notre Dame and Amir Jina, Assistant Professor of Public Policy, University of Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.