Archive for Programming

When AI goes shopping: AI agents promise to lighten your purchasing load – if they can earn your trust

By Tamilla Triantoro, Quinnipiac University 

Online shopping often involves endless options and fleeting discounts. A single search for running shoes can yield hundreds of results across multiple platforms, each promising the “best deal.” The holiday season brings excitement, but it also brings a blend of decision fatigue and logistical nightmares.

What if there were a tool capable of hunting for the best prices, navigating endless sales and making sure your purchases arrive on time?

The next evolution in artificial intelligence is AI agents that are capable of autonomous reasoning and multistep problem-solving. AI shopping agents not only suggest what you might like, but they can also act on your behalf. Major retailers and AI companies are developing AI shopping assistants, and the AI company Perplexity released Buy with Pro on Nov. 18, 2024.

Picture this: You prompt AI to find a winter coat under $200 that’s highly rated and will arrive by Sunday. In seconds, it scans websites, compares prices, checks reviews, confirms availability and places the order, all while you go about your day.
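
For a sense of how such an agent might work under the hood, here is a minimal, entirely hypothetical sketch of the filter-and-rank step in Python. The product data, field names and thresholds are invented for illustration; a real agent would gather this information by browsing or querying retailers.

```python
# Hypothetical sketch of the filter-and-rank loop a shopping agent might run.
# The product list, fields and thresholds are all invented for illustration;
# a real agent would gather these by querying retailer APIs or browsing.
from datetime import date

products = [
    {"name": "Alpine Parka", "price": 179.99, "rating": 4.7, "delivery": date(2025, 1, 5)},
    {"name": "Urban Wool Coat", "price": 229.00, "rating": 4.8, "delivery": date(2025, 1, 4)},
    {"name": "Trail Shell", "price": 149.50, "rating": 4.2, "delivery": date(2025, 1, 8)},
]

def pick_coat(products, max_price=200, min_rating=4.5, arrive_by=date(2025, 1, 6)):
    # Keep only items that satisfy every constraint in the user's request ...
    candidates = [p for p in products
                  if p["price"] <= max_price
                  and p["rating"] >= min_rating
                  and p["delivery"] <= arrive_by]
    # ... then rank the survivors, here by rating first and price second.
    return max(candidates, key=lambda p: (p["rating"], -p["price"]), default=None)

print(pick_coat(products))  # -> the Alpine Parka in this toy data
```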

Perplexity’s recently released AI shopping agent can search for items across the web using multiple free-form variables such as color, size, price and shipping time.
Screenshot by Tamilla Triantoro

Unlike traditional recommendation engines, AI agents learn your preferences and handle tasks autonomously. The agents are built with machine learning and natural language processing. They learn from interactions with the people using them, becoming smarter and more efficient over time through those collective interactions.

Looking ahead, AI agents are likely to not only master personal shopping needs but also negotiate directly with corporate AI systems. They will not only learn your preferences but will likely be able to book tailored experiences, handle payments across platforms and coordinate schedules.

As a researcher who studies human-AI collaboration, I see how AI agents could make the future of shopping virtually effortless and more personalized than ever.

How AI agents help shoppers

Marketplaces such as Amazon and Walmart have been using AI to automate shopping. Google Lens offers a visual search tool for finding products.

Perplexity’s Buy with Pro is a more powerful AI shopping agent. By providing your shipping and billing information, you can place orders directly on the Perplexity app with free shipping on every order. The shopping assistant is part of the company’s Perplexity Pro service, which has free and paid tiers.

For those looking to build custom AI shopping agents, AutoGPT and AgentGPT are open-source tools for configuring and deploying AI agents.

Consumers today are focused on value, looking for deals and comparing prices across platforms. Having an assistant perform these tasks could be a tremendous time saver. But can AI truly learn your preferences?

A recent study using the GPT-4o model achieved 85% accuracy in imitating the thoughts and behaviors of over 1,000 people after each person interacted with the AI for just two hours. This breakthrough finding suggests that digital personas can understand and act on people’s preferences in ways that could transform the shopping experience.

How AI shopping reshapes business

AI agents are moving beyond recommendations to autonomously executing complex tasks such as automating refunds, managing inventory and approving pricing decisions. This evolution has already begun to reshape how businesses operate and how consumers interact with them.

Retailers using AI agents are seeing measurable benefits. Data from the Salesforce shopping index shows that since October 2024, digital retailers using generative AI have achieved a 7% increase in average order revenue and attributed 17% of global orders to AI-driven personalized recommendations, targeted promotions and improved customer service.

Meanwhile, the nature of search and advertising is undergoing a major shift. Amazon is capturing billions of dollars in ad revenue as shoppers bypass Google to search directly on its platform. Simultaneously, AI-powered search tools such as Perplexity and OpenAI’s web-enabled chat deliver instant, context-aware responses, challenging traditional search engines and forcing advertisers to rethink their strategies.

The outcome of the battle between Big Tech and open-source initiatives to shape the AI ecosystem is also likely to affect how the shopping experience changes.

Shoppers can have back-and-forth interactions with AI agents.
Screenshot by Tamilla Triantoro

The risks: Privacy, manipulation and dependency

While AI agents offer significant benefits, they also raise critical privacy concerns. AI systems require extensive access to personal data, shopping history and financial information. This level of access increases the risk of misuse and unauthorized sharing.

Manipulation is another issue. AI can be highly persuasive and may be optimized to serve corporate interests over consumer welfare. Such technology can prioritize upselling or nudging shoppers toward higher-margin products under the guise of personalization.

There’s also the risk of dependency. Automating many aspects of shopping could diminish the satisfaction of making choices. Research in human-AI interaction indicates that while AI tools can reduce cognitive load, increased reliance on AI could impair people’s ability to critically evaluate their options.

What’s next?

AI-based shopping is still in its infancy, so how much trust should you place in it?

In our book “Converging Minds,” AI researcher Aleksandra Przegalinska and I argue for a balanced and critical approach to AI adoption, recognizing both its potential and its pitfalls.

As cognitive scientist Gary Marcus points out, AI’s moral limitations stem from technical constraints: Despite efforts to prevent errors, these systems remain imperfect.

This cautious perspective is reflected in the responses from my MBA class. When I asked students whether they were ready to outsource their holiday shopping to AI, the answer was an overwhelming no. Ethan Mollick, a professor at the Wharton School at the University of Pennsylvania, has argued that the adoption of AI in everyday life will be gradual, as societal change typically lags behind technological advancement.

Before people are willing to hand over their credit cards and let AI take the reins, businesses will have to ensure that AI systems align with human values and priorities. The promise of AI is vast, but to fulfill that promise I believe that AI will need to be an extension of human intention – not a replacement for it.

About the Author:

Tamilla Triantoro, Associate Professor of Business Analytics and Information Systems, Quinnipiac University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI has been a boon for marketing, but the dark side of using algorithms to sell products and brands is little studied

By Lauren Labrecque, University of Rhode Island 

Artificial intelligence is revolutionizing the way companies market their products, enabling them to target consumers in personalized and interactive ways that not long ago seemed like the realm of science fiction.

Marketers use AI-powered algorithms to scour vast amounts of data that reveals individual preferences with unrivaled accuracy. This allows companies to precisely target content – ads, emails, social media posts – that feels tailor-made and helps cultivate companies’ relationships with consumers.

As a researcher who studies technology in marketing, I joined several colleagues in conducting new research that shows AI marketing overwhelmingly neglects its potential negative consequences.

Our peer-reviewed study examined 290 articles published over the past 10 years in 15 high-ranking marketing journals. We found that only 33 of them addressed the potential “dark side” of AI marketing.

This matters because the imbalance creates a critical gap in understanding the full impact of AI.

AI marketing can perpetuate harmful stereotypes, such as producing hypersexualized depictions of women, for example. AI can also infringe on the individual rights of artists. And it can spread misinformation through deepfakes and “hallucinations,” which occur when AI presents false information as if it were true, such as inventing historical events.

It can also negatively affect mental health. The prevalence of AI-powered beauty filters on social media, for instance, can foster unrealistic ideals and trigger depression.

These concerns loom large, prompting anxiety about the potential misuse of this powerful technology. Many people experience these worries, but young women are notably vulnerable. As AI apps gain acceptance, beauty standards are moving further from reality.

Our research finds there is an urgent need to address AI’s ethical considerations and potential negative consequences. Our intent is not to discredit AI. It’s to make sure that AI marketing benefits everyone, not just a handful of powerful companies.

I believe researchers should consider exploring the ethical problems with AI more thoroughly, and how to use it safely and responsibly.

This is important because AI is suddenly being used everywhere – from social media to self-driving cars to making health decisions. Understanding its potential negative effects empowers the public to be informed consumers and call for responsible AI use.

About the Author:

Lauren Labrecque, Professor of Marketing, University of Rhode Island

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Asking ChatGPT vs Googling: Can AI chatbots boost human creativity?

By Jaeyeon Chung, Rice University 

Think back to a time when you needed a quick answer, maybe for a recipe or a DIY project. A few years ago, most people’s first instinct was to “Google it.” Today, however, many people are more likely to reach for ChatGPT, OpenAI’s conversational AI, which is changing the way people look for information.

Rather than simply providing lists of websites, ChatGPT gives more direct, conversational responses. But can ChatGPT do more than just answer straightforward questions? Can it actually help people be more creative?

I study new technologies and consumer interaction with social media. My colleague Byung Lee and I set out to explore this question: Can ChatGPT genuinely assist people in creatively solving problems, and does it perform better at this than traditional search engines like Google?

Across a series of experiments in a study published in the journal Nature Human Behaviour, we found that ChatGPT does boost creativity, especially in everyday, practical tasks. Here’s what we learned about how this technology is changing the way people solve problems, brainstorm ideas and think creatively.

ChatGPT and creative tasks

Imagine you’re searching for a creative gift idea for a teenage niece. Previously, you might have googled “creative gifts for teens” and then browsed articles until something clicked. Now, if you ask ChatGPT, it generates a direct response based on its analysis of patterns across the web. It might suggest a custom DIY project or a unique experience, crafting the idea in real time.

To explore whether ChatGPT surpasses Google in creative thinking tasks, we conducted five experiments where participants tackled various creative tasks. For example, we randomly assigned participants to either use ChatGPT for assistance, use Google search, or generate ideas on their own. Once the ideas were collected, external judges, unaware of the participants’ assigned conditions, rated each idea for creativity. We averaged the judges’ scores to provide an overall creativity rating.
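
As a rough illustration of that scoring step, here is a minimal Python sketch that averages blinded judges’ ratings by condition. All of the numbers are invented; they are not data from the study.

```python
# Toy illustration of the scoring step: blinded judges rate each idea,
# and ratings are averaged per experimental condition. All numbers invented.
from statistics import mean

ratings = {  # condition -> list of per-idea creativity scores from judges
    "chatgpt": [6.1, 5.8, 6.4, 5.9],
    "google":  [5.2, 4.9, 5.5, 5.0],
    "alone":   [4.8, 5.1, 4.6, 5.0],
}

for condition, scores in ratings.items():
    print(f"{condition}: mean creativity = {mean(scores):.2f}")
```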

One task involved brainstorming ways to repurpose everyday items, such as turning an old tennis racket and a garden hose into something new. Another asked participants to design an innovative dining table. The goal was to test whether ChatGPT could help people come up with more creative solutions compared with using a web search engine or just their own imagination.

The results were clear: Judges rated ideas generated with ChatGPT’s assistance as more creative than those generated with Google searches or without any assistance. Interestingly, ideas generated with ChatGPT – even without any human modification – scored higher in creativity than those generated with Google.

One notable finding was ChatGPT’s ability to generate incrementally creative ideas: those that improve or build on what already exists. While truly radical ideas might still be challenging for AI, ChatGPT excelled at suggesting practical yet innovative approaches. In the toy-design experiment, for example, participants using ChatGPT came up with imaginative designs, such as turning a leftover fan and a paper bag into a wind-powered craft.

Limits of AI creativity

ChatGPT’s strength lies in its ability to combine unrelated concepts into a cohesive response. Unlike Google, which requires users to sift through links and piece together information, ChatGPT offers an integrated answer that helps users articulate and refine ideas in a polished format. This makes ChatGPT promising as a creativity tool, especially for tasks that connect disparate ideas or generate new concepts.

It’s important to note, however, that ChatGPT doesn’t generate truly novel ideas. It recognizes and combines linguistic patterns from its training data, subsequently generating outputs with the most probable sequences based on its training. If you’re looking for a way to make an existing idea better or adapt it in a new way, ChatGPT can be a helpful resource. For something groundbreaking, though, human ingenuity and imagination are still essential.

Additionally, while ChatGPT can generate creative suggestions, these aren’t always practical or scalable without expert input. Steps such as screening, feasibility checks, fact-checking and market validation require human expertise. Given that ChatGPT’s responses may reflect biases in its training data, people should exercise caution in sensitive contexts such as those involving race or gender.

We also tested whether ChatGPT could assist with tasks often seen as requiring empathy, such as repurposing items cherished by a loved one. Surprisingly, ChatGPT enhanced creativity even in these scenarios, generating ideas that users found relevant and thoughtful. This result challenges the belief that AI cannot assist with emotionally driven tasks.

Future of AI and creativity

As ChatGPT and similar AI tools become more accessible, they open up new possibilities for creative tasks. Whether in the workplace or at home, AI could assist in brainstorming, problem-solving and enhancing creative projects. However, our research also points to the need for caution: While ChatGPT can augment human creativity, it doesn’t replace the unique human capacity for truly radical, out-of-the-box thinking.

This shift from Googling to asking ChatGPT represents more than just a new way to access information. It marks a transformation in how people collaborate with technology to think, create and innovate.

About the Author:

Jaeyeon Chung, Assistant Professor of Business, Rice University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Do people trust AI on financial decisions? We found it really depends on who they are

By Gertjan Verdickt, University of Auckland, Waipapa Taumata Rau 

When it comes to investing and planning your financial future, are you more willing to trust a person or a computer?

This isn’t a hypothetical question any more.

Big banks and investment firms are using artificial intelligence (AI) to help make financial predictions and give advice to clients.

Morgan Stanley uses AI to mitigate the potential biases of its financial analysts when it comes to stock market predictions. And one of the world’s biggest investment banks, Goldman Sachs, recently announced it was trialling the use of AI to help write computer code, though the bank declined to say which division it was being used in. Other companies are using AI to predict which stocks might go up or down.

But do people actually trust these AI advisers with their money?

Our new research examines this question. We found it really depends on who you are and on your prior knowledge of AI and how it works.

Despite the growing sophistication of artificial intelligence, investors prefer human expertise when it comes to stock market predictions, according to a new study.

Trust differences

To examine the question of trust when it comes to using AI for investment, we asked 3,600 people in the United States to imagine they were getting advice about the stock market.

In these imagined scenarios, some people got advice from human experts. Others got advice from AI. And some got advice from humans working together with AI.

In general, people were less likely to follow advice if they knew AI was involved in making it. They seemed to trust the human experts more.

But the distrust of AI wasn’t universal. Some groups of people were more open to AI advice than others.

For example, women were more likely to trust AI advice than men (by 7.5%). People who knew more about AI were more willing to listen to the advice it provided (by 10.1%). And politics mattered – people who supported the Democratic Party were more open to AI advice than others (by 7.3%).

We also found people were more likely to trust simpler AI methods.

When we told our research participants the AI was using something called “ordinary least squares” (a basic mathematics technique in which a straight line is used to estimate the relationship between two variables), they were more likely to trust it than when we said it was using “deep learning” (a more complex AI method).
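
For readers curious what that looks like in practice, here is a minimal sketch of ordinary least squares fitting a straight line to invented data points; the two variables simply stand in for whatever quantities a financial model might relate.

```python
# A minimal example of "ordinary least squares": fitting a straight line
# y = a*x + b that best explains the relationship between two variables.
# The data points are invented for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # e.g., a market indicator
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])        # e.g., a stock's return

slope, intercept = np.polyfit(x, y, deg=1)      # deg=1 -> a straight line
print(f"y ≈ {slope:.2f} * x + {intercept:.2f}")
```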

This might be because people tend to trust things they understand, much like how a person might trust a simple calculator more than a complex scientific instrument they have never seen before.

Trust in the future of finance

As AI becomes more common in the financial world, companies will need to find ways to improve levels of trust.

This might involve teaching people more about how the AI systems work, being clear about when and how AI is being used, and finding the right balance between human experts and AI.

Furthermore, we need to tailor how AI advice is presented to different groups of people and show how well AI performs over time compared to human experts.

The future of finance might involve a lot more AI, but only if people learn to trust it. It’s a bit like learning to trust self-driving cars. The technology might be great, but if people don’t feel comfortable using it, it won’t catch on.

Our research shows that building this trust isn’t just about making better AI. It’s about understanding how people think and feel about AI. It’s about bridging the gap between what AI can do and what people believe it can do.

As we move forward, we’ll need to keep studying how people react to AI in finance. We’ll need to find ways to make AI not just a powerful tool, but a trusted advisor that people feel comfortable relying on for important financial decisions.

The world of finance is changing fast, and AI is a big part of that change. But in the end, it’s still people who decide where to put their money. Understanding how to build trust between humans and AI will be key to shaping the future of finance.

About the Author:

Gertjan Verdickt, Lecturer, Business School, University of Auckland, Waipapa Taumata Rau

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How a subfield of physics led to breakthroughs in AI – and from there to this year’s Nobel Prize

By Veera Sundararaghavan, University of Michigan 

John J. Hopfield and Geoffrey E. Hinton received the Nobel Prize in physics on Oct. 8, 2024, for their research on machine learning algorithms and neural networks that help computers learn. Their work has been fundamental in developing neural network theories that underpin generative artificial intelligence.

A neural network is a computational model consisting of layers of interconnected neurons. Like the neurons in your brain, these neurons process and send along a piece of information. Each neural layer receives a piece of data, processes it and passes the result to the next layer. By the end of the sequence, the network has processed and refined the data into something more useful.
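
A minimal sketch of that layer-by-layer flow, with invented random weights standing in for what a trained network would have learned:

```python
# A bare-bones sketch of data flowing through a layered neural network:
# each layer transforms its input and hands the result to the next layer.
# Weights here are random; a trained network would have learned them.
import numpy as np

rng = np.random.default_rng(0)
layer_weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=(8, 2))]

def forward(x, weights):
    for w in weights:
        x = np.tanh(x @ w)   # transform, then pass along to the next layer
    return x

print(forward(rng.normal(size=4), layer_weights))  # refined 2-number output
```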

While it might seem surprising that Hopfield and Hinton received the physics prize for their contributions to neural networks, used in computer science, their work is deeply rooted in the principles of physics, particularly a subfield called statistical mechanics.

As a computational materials scientist, I was excited to see this area of research recognized with the prize. Hopfield and Hinton’s work has allowed my colleagues and me to study a process called generative learning for materials sciences, a method that is behind many popular technologies like ChatGPT.

What is statistical mechanics?

Statistical mechanics is a branch of physics that uses statistical methods to explain the behavior of systems made up of a large number of particles.

Instead of focusing on individual particles, researchers using statistical mechanics look at the collective behavior of many particles. Seeing how they all act together helps researchers understand the system’s large-scale macroscopic properties like temperature, pressure and magnetization.

For example, physicist Ernst Ising developed a statistical mechanics model for magnetism in the 1920s. Ising imagined magnetism as the collective behavior of atomic spins interacting with their neighbors.

In Ising’s model, there are higher and lower energy states for the system, and the material is more likely to exist in the lowest energy state.

One key idea in statistical mechanics is the Boltzmann distribution, which quantifies how likely a given state is. This distribution describes the probability of a system being in a particular state – like solid, liquid or gas – based on its energy and temperature.
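
Here is a small sketch of the Boltzmann distribution in action, computing state probabilities from invented energies and an arbitrary temperature:

```python
# The Boltzmann distribution: the probability of a state falls off
# exponentially with its energy, p(s) ∝ exp(-E_s / kT).
# Energies and temperature below are in arbitrary, invented units.
import numpy as np

energies = np.array([0.0, 1.0, 2.0])     # three possible states of a system
kT = 0.5                                 # temperature (times Boltzmann's constant)

weights = np.exp(-energies / kT)
probabilities = weights / weights.sum()  # normalize so probabilities sum to 1
print(probabilities)                     # lowest-energy state is most likely
```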

Ising exactly predicted the phase transition of a magnet using the Boltzmann distribution. He figured out the temperature at which the material changed from being magnetic to nonmagnetic.

Phase changes happen at predictable temperatures. Ice melts to water at a specific temperature because the Boltzmann distribution predicts that when it gets warm, the water molecules are more likely to take on a disordered – or liquid – state.

Statistical mechanics tells researchers about the properties of a larger system, and how individual objects in that system act collectively.

In materials, atoms arrange themselves into specific crystal structures that use the lowest amount of energy. When it’s cold, water molecules freeze into ice crystals with low energy states.

Similarly, in biology, proteins fold into low energy shapes, which allow them to function as specific antibodies – like a lock and key – targeting a virus.

Neural networks and statistical mechanics

Fundamentally, all neural networks work on a similar principle – to minimize energy. Neural networks use this principle to solve computing problems.

For example, imagine an image made up of pixels where you only can see a part of the picture. Some pixels are visible, while the rest are hidden. To determine what the image is, you consider all possible ways the hidden pixels could fit together with the visible pieces. From there, you would choose from among what statistical mechanics would say are the most likely states out of all the possible options.

In statistical mechanics, researchers try to find the most stable physical structure of a material. Neural networks use the same principle to solve complex computing problems.
Veera Sundararaghavan

Hopfield and Hinton developed a theory for neural networks based on the idea of statistical mechanics. Just like Ising before them, who modeled the collective interaction of atomic spins, Hopfield and Hinton imagined collective interactions of pixels, which they represented as neurons, to solve the photo problem with a neural network.

Just as in statistical physics, the energy of an image refers to how likely a particular configuration of pixels is. A Hopfield network would solve this problem by finding the lowest energy arrangements of hidden pixels.
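
A toy version of that idea fits in a few lines: store one binary pattern in Hebbian weights, corrupt part of it, and let the network settle back to the stored pattern by moving downhill in energy. The pattern and network size below are invented for illustration.

```python
# A toy Hopfield network: store one binary pattern with Hebbian weights,
# then recover it from a corrupted copy by repeatedly flipping "pixels"
# toward lower energy. Pattern and size are invented for illustration.
import numpy as np

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])          # the stored image
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                                     # no self-connections

state = pattern.copy()
state[:3] = -state[:3]                                     # "hide"/corrupt 3 pixels

for _ in range(5):                                         # asynchronous updates
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1          # move downhill in energy

print(np.array_equal(state, pattern))                      # True: pattern recovered
```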

However, unlike in statistical mechanics – where the energy is determined by known atomic interactions – neural networks learn these energies from data.

Hinton popularized the development of a technique called backpropagation. This technique helps the model figure out the interaction energies between these neurons, and this algorithm underpins much of modern AI learning.

The Boltzmann machine

Building upon Hopfield’s work, Hinton imagined another neural network, called the Boltzmann machine. It consists of visible neurons, which we can observe, and hidden neurons, which help the network learn complex patterns.

In a Boltzmann machine, you can determine the probability that the picture looks a certain way. To figure out this probability, you can sum up all the possible states the hidden pixels could be in. This gives you the total probability of the visible pixels being in a specific arrangement.
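
That sum over hidden states can be made concrete with a tiny, invented example: for each visible configuration, add up Boltzmann weights over every hidden configuration, then normalize. This brute-force enumeration is only feasible at toy sizes.

```python
# Toy Boltzmann-machine calculation: the probability of a visible pixel
# configuration is found by summing Boltzmann weights over every possible
# hidden configuration. Sizes and couplings are invented for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 3, 4
W = rng.normal(size=(n_visible, n_hidden))    # visible-hidden couplings

def unnormalized_p(v):
    # Sum exp(-E(v, h)) over all 2**n_hidden hidden states, with E = -v W h.
    return sum(np.exp(v @ W @ np.array(h))
               for h in itertools.product([-1, 1], repeat=n_hidden))

visible_states = list(itertools.product([-1, 1], repeat=n_visible))
Z = sum(unnormalized_p(np.array(v)) for v in visible_states)    # normalizer
print({v: unnormalized_p(np.array(v)) / Z for v in visible_states})
```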

My group has worked on implementing Boltzmann machines in quantum computers for generative learning.

In generative learning, the network learns to generate new data samples that resemble the data the researchers fed the network to train it. For example, it might generate new images of handwritten numbers after being trained on similar images. The network can generate these by sampling from the learned probability distribution.

Generative learning underpins modern AI – it’s what allows the generation of AI art, videos and text.

Hopfield and Hinton have significantly influenced AI research by leveraging tools from statistical physics. Their work draws parallels between how nature determines the physical states of a material and how neural networks predict the likelihood of solutions to complex computer science problems.

About the Author:

Veera Sundararaghavan, Professor of Aerospace Engineering, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Teachers feel most productive when they use AI for teaching strategies

By Samantha Keppler, University of Michigan and Clare Snyder, University of Michigan 

Teachers can use generative AI in a variety of ways. They may use it to develop lesson plans and quizzes. Or teachers may rely on a generative AI tool, such as ChatGPT, for insight on how to teach a concept more effectively.

In our new research, only the teachers doing both of those things reported feeling that they were getting more done. They also told us that their teaching was more effective with AI.

Over the course of the 2023-2024 school year, we followed 24 teachers at K-12 schools throughout the United States as they wrestled with whether and how to use generative AI for their work. We gave them a standard training session on generative AI in the fall of 2023. We then conducted multiple observations, interviews and surveys throughout the year.

We found that teachers felt more productive and effective with generative AI when they turned to it for advice. The standard methods for teaching to state standards that work for one student, or in one school year, might not work as well in another. Teachers may get stuck and need to try a different approach. Generative AI, it turns out, can be a source of ideas for those alternative approaches.

While many focus on the productivity benefits of how generative AI can help teachers make quizzes or activities faster, our study points to something different. Teachers feel more productive and effective when their students are learning, and generative AI seems to help some teachers get new ideas about how to advance student learning.

Why it matters

K-12 teaching requires creativity, particularly when it comes to tasks such as lesson plans or how to integrate technology into the classroom. Teachers are under pressure to work quickly, however, because they have so many things to do, such as prepare teaching materials, meet with parents and grade students’ schoolwork. Teachers do not have enough time each day to do all of the work that they need to.

We know that such pressure often makes creativity difficult. This can make teachers feel stuck. Some people, in particular AI experts, view generative AI as a solution to this problem; generative AI is always on call, it works quickly, and it never tires.

However, this view assumes that teachers will know how to use generative AI effectively to get the solutions they are seeking. Our research reveals that for many teachers, the time it takes to get a satisfactory output from the technology – and revise it to fit their needs – is no shorter than the time it would take to create the materials from scratch on their own. This is why using generative AI to create materials is not enough to get more done.

By understanding how teachers can effectively use generative AI for advice, schools can make more informed decisions about how to invest in AI for their teachers and how to support teachers in using these new tools. Further, this feeds back to the scientists creating AI tools, who can make better decisions about how to design these systems.

What still isn’t known

Many teachers face roadblocks that prevent them from seeing the benefits of generative AI tools such as ChatGPT. These benefits include being able to create better materials faster. The teachers we talked to, however, were all new users of the technology. Teachers who are more familiar with ways to prompt generative AI – we call them “power users” – might have other ways of interacting with the technology that we did not see. We also do not yet know exactly why some teachers move from being new users to proficient users but others do not.

About the Authors:

The Research Brief is a short take on interesting academic work.

Samantha Keppler, Assistant Professor of Technology and Operations, Stephen M. Ross School of Business, University of Michigan and Clare Snyder, PhD Candidate in Business Administration, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Tiny robots and AI algorithms could help to craft material solutions for cleaner environments

By Mahshid Ahmadi, University of Tennessee 

Many human activities release pollutants into the air, water and soil. These harmful chemicals threaten the health of both people and the ecosystem. According to the World Health Organization, air pollution causes an estimated 4.2 million deaths annually.

Scientists are looking into solutions, and one potential avenue is a class of materials called photocatalysts. When triggered by light, these materials undergo chemical reactions that initial studies have shown can break down common toxic pollutants.

I am a materials science and engineering researcher at the University of Tennessee. With the help of robots and artificial intelligence, my colleagues and I are making and testing new photocatalysts with the goal of mitigating air pollution.

Breaking down pollutants

The photocatalysts work by generating charged carriers in the presence of light. These charged carriers are tiny particles that can move around and cause chemical reactions. When they come into contact with water and oxygen in the environment, they produce substances called reactive oxygen species. These highly active reactive oxygen species can bond to parts of the pollutants and then either decompose the pollutants or turn them into harmless – or even useful – products.

To facilitate the photocatalytic reaction, researchers in the Ahmadi lab put plates of perovskite nanocrystals and pollutants under bright light to see whether the reaction breaks down the pollutants.
Astita Dubey

But some materials used in the photocatalytic process have limitations. For example, they can’t start the reaction unless the light has enough energy – lower-energy light such as infrared rays, or even visible light, won’t trigger the reaction.

Another problem is that the charged particles involved in the reaction can recombine too quickly, which means they join back together before finishing the job. In these cases, the pollutants either do not decompose completely or the process takes a long time to accomplish.

Additionally, the surface of these photocatalysts can sometimes change during or after the photocatalytic reaction, which affects how they work and how efficient they are.

To overcome these limitations, scientists on my team are trying to develop new photocatalytic materials that work efficiently to break down pollutants. We also focus on making sure these materials are nontoxic so that our pollution-cleaning materials aren’t causing further pollution.

This plate from the Ahmadi lab is used while testing how perovskite nanocrystals and light break down pollutants, like the blue dye shown. The light blue color indicates partial degradation, while transparent water signifies complete degradation.
Astita Dubey

Teeny tiny crystals

Scientists on my team use automated experimentation and artificial intelligence to figure out which photocatalytic materials could be the best candidates to quickly break down pollutants. We’re making and testing materials called hybrid perovskites, which are tiny crystals – they’re about a 10th the thickness of a strand of hair.

These nanocrystals are made of a blend of organic (carbon-based) and inorganic (non-carbon-based) components.

They have a few unique qualities, like their excellent light-absorbing properties, which come from how they’re structured at the atomic level. They’re tiny, but mighty. Optically, they’re amazing too – they interact with light in fascinating ways to generate a large number of tiny charge carriers and trigger photocatalytic reactions.

These materials efficiently transport electrical charges, which allows them to transport light energy and drive the chemical reactions. They’re also used to make solar panels more efficient and in LED lights, which create the vibrant displays you see on TV screens.

There are thousands of potential types of hybrid nanocrystals. So, my team wanted to figure out how to make and test as many as we can quickly, to see which are the best candidates for cleaning up toxic pollutants.

Bringing in robots

Instead of making and testing samples by hand – which takes weeks or months – we’re using smart robots, which can produce and test at least 100 different materials within an hour. These small liquid-handling robots can precisely move, mix and transfer tiny amounts of liquid from one place to another. They’re controlled by a computer that guides their acceleration and accuracy.

The Opentrons pipetting robot helps Astita Dubey, a visiting scientist working with the Ahmadi lab, synthesize materials and treat them with organic pollutants to test whether they can break down the pollutants.
Jordan Marshall

We also use machine learning to guide this process. Machine learning algorithms can analyze test data quickly and then learn from that data for the next set of experiments executed by the robots. These machine learning algorithms can quickly identify patterns and insights in collected data that would normally take much longer for a human eye to catch.
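
That loop can be sketched schematically: fit a model to the measurements collected so far, then let it nominate the next batch of candidates for the robots. The data, model choice and selection rule below are illustrative assumptions, not our lab’s actual pipeline.

```python
# A schematic active-learning loop of the kind described above: a model is
# fit to the results measured so far and proposes the next candidates for
# the robots to test. Data, model and scoring are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
candidates = rng.uniform(size=(500, 3))        # e.g., composition parameters
true_perf = lambda X: 1 - ((X - 0.6) ** 2).sum(axis=1)   # hidden "ground truth"

tested_X = candidates[:10]                      # first robot batch
tested_y = true_perf(tested_X)

for _ in range(5):
    model = RandomForestRegressor(random_state=0).fit(tested_X, tested_y)
    preds = model.predict(candidates)
    best = np.argsort(preds)[-10:]              # next batch: most promising
    tested_X = np.vstack([tested_X, candidates[best]])
    tested_y = np.concatenate([tested_y, true_perf(candidates[best])])

print(f"best measured performance: {tested_y.max():.3f}")
```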

Our approach aims to simplify and better understand complex photocatalytic systems, helping to create new strategies and materials. By using automated experimentation guided by machine learning, we can now make these systems easier to analyze and interpret, overcoming challenges that were difficult with traditional methods.

About the Author:

Mahshid Ahmadi, Assistant Professor of Materials Science and Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Quantum information theorists are shedding light on entanglement, one of the spooky mysteries of quantum mechanics

By William Mark Stuckey, Elizabethtown College 

The year 2025 marks the 100th anniversary of the birth of quantum mechanics. In the century since the field’s inception, scientists and engineers have used quantum mechanics to create technologies such as lasers, MRI scanners and computer chips.

Today, researchers are looking toward building quantum computers and ways to securely transfer information using an entirely new sister field called quantum information science.

But despite creating all these breakthrough technologies, physicists and philosophers who study quantum mechanics still haven’t come up with the answers to some big questions raised by the field’s founders. Given recent developments in quantum information science, researchers like me are using quantum information theory to explore new ways of thinking about these unanswered foundational questions. And one direction we’re looking into relates Albert Einstein’s relativity principle to the qubit.

Quantum computers

Quantum information science focuses on building quantum computers based on the quantum “bit” of information, or qubit. The qubit is historically grounded in the discoveries of physicists Max Planck and Einstein. They instigated the development of quantum mechanics in 1900 and 1905, respectively, when they discovered that light exists in discrete, or “quantum,” bundles of energy.

These quanta of energy also come in small forms of matter, such as atoms and electrons, which make up everything in the universe. It is the odd properties of these tiny packets of matter and energy that are responsible for the computational advantages of the qubit.

A computer based on a quantum bit rather than a classical bit could have a significant computing advantage. And that’s because a classical bit produces a binary response – either a 1 or a 0 – to only one query.

In contrast, the qubit produces a binary response to infinitely many queries using the property of quantum superposition. This property allows researchers to connect multiple qubits in what’s called a quantum entangled state. Here, the entangled qubits act collectively in a way that arrays of classical bits cannot.

That means a quantum computer can do some calculations much faster than an ordinary computer. For example, one device reportedly used 76 entangled qubits to solve a sampling problem 100 trillion times faster than a classical computer.

But the exact force or principle of nature responsible for this quantum entangled state that underlies quantum computing is a big unanswered question. A solution that my colleagues and I in quantum information theory have proposed has to do with Einstein’s relativity principle.

Quantum superposition and entanglement allow qubits to contain far more information than classical bits.

Quantum information theory

The relativity principle says that the laws of physics are the same for all observers, regardless of where they are in space, how they’re oriented or how they’re moving relative to each other. My team showed how to use the relativity principle in conjunction with the principles of quantum information theory to account for quantum entangled particles.

Quantum information theorists like me think about quantum mechanics as a theory of information principles rather than a theory of forces. That’s very different than the typical approach to quantum physics, in which force and energy are important concepts for doing the calculations. In contrast, quantum information theorists don’t need to know what sort of physical force might be causing the mysterious behavior of entangled quantum particles.

That gives us an advantage for explaining quantum entanglement because, as physicist John Bell proved in 1964, any explanation for quantum entanglement in terms of forces requires what Einstein called “spooky actions at a distance.”

That’s because the measurement outcomes of the two entangled quantum particles are correlated – even if those measurements are done at the same time and the particles are physically separated by a vast distance. So, if a force is causing quantum entanglement, it would have to act faster than the speed of light. And a faster-than-light force violates Einstein’s theory of special relativity.

Quantum entanglement is important to quantum computing.

Many researchers are trying to find an explanation for quantum entanglement that doesn’t require spooky actions at a distance, like my team’s proposed solution.

Classical and quantum entanglement

In entanglement, you can know something about two particles collectively – call them particle 1 and particle 2 – so that when you measure particle 1, you immediately know something about particle 2.

Imagine you’re mailing two friends, whom physicists typically call Alice and Bob, each one glove from the same pair of gloves. When Alice opens her box and sees a left-hand glove, she’ll know immediately that when Bob opens the other box he will see the right-hand glove. Each box and glove combination produces one of two outcomes, either a right-hand glove or a left-hand glove. There’s only one possible measurement – opening the box – so Alice and Bob have entangled classical bits of information.

But in quantum entanglement the situation involves entangled qubits, which behave very differently than classical bits.
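
A small numerical illustration of what “entangled qubits” means: in the Bell state (|00> + |11>)/√2, only the perfectly correlated outcomes 00 and 11 have nonzero probability. The sketch below simply squares the amplitudes.

```python
# A small numerical illustration of an entangled pair: in the Bell state
# (|00> + |11>)/sqrt(2), the only outcomes with nonzero probability are the
# perfectly correlated ones, 00 and 11.
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # amplitudes for 00, 01, 10, 11

probabilities = np.abs(bell) ** 2
for outcome, p in zip(["00", "01", "10", "11"], probabilities):
    print(outcome, p)   # 00 and 11 each occur with probability 0.5
```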

Qubit behavior

Consider a property of electrons called spin. When you measure an electron’s spin using magnets that are oriented vertically, you always get a spin that’s up or down, nothing in between. That’s a binary measurement outcome, so this is a bit of information.

Two magnets oriented vertically can measure an electron’s vertical spin. After moving through the magnets, the electron is deflected either up or down. Similarly, two magnets oriented horizontally can measure an electron’s horizontal spin. After moving through the magnets, the electron is deflected either left or right.
Timothy McDevitt

If you turn the magnets on their sides to measure an electron’s spin horizontally, you always get a spin that’s left or right, nothing in between. The vertical and horizontal orientations of the magnets constitute two different measurements of this same bit. So, electron spin is a qubit – it produces a binary response to multiple measurements.

Quantum superposition

Now suppose you first measure an electron’s spin vertically and find it is up, then you measure its spin horizontally. Here’s an analogy: When you stand straight up, you don’t move to your right or your left at all. So, if I measure how much you move side to side as you stand straight up, I’ll get zero.

That’s exactly what you might expect for the vertical spin up electrons. Since they have vertically oriented spin up, analogous to standing straight up, they should not have any spin left or right horizontally, analogous to moving side to side.

Surprisingly, physicists have found that half of them are horizontally right and half are horizontally left. Now it doesn’t seem to make sense that a vertical spin up electron has left spin (-1) and right spin (+1) outcomes when measured horizontally, just as we expect no side-to-side movement when standing straight up.

But when you add up all the left (-1) and right (+1) spin outcomes you do get zero, as we expected in the horizontal direction when our spin state is vertical spin up. So, on average, it’s like having no side-to-side or horizontal movement when we stand straight up.

This 50-50 ratio over the binary (+1 and -1) outcomes is what physicists are talking about when they say that a vertical spin up electron is in a quantum superposition of horizontal spins left and right.
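
That 50-50 superposition is easy to verify numerically: write the vertical spin-up state in the horizontal basis and square the overlaps. The basis vectors below follow the standard convention; the “left”/“right” labels are just names.

```python
# The vertical spin-up state rewritten in the horizontal basis: it is an
# equal superposition of "left" and "right," so each outcome occurs half
# the time and the +1/-1 results average to zero.
import numpy as np

up = np.array([1.0, 0.0])                       # vertical spin up
left = np.array([1.0, -1.0]) / np.sqrt(2)       # horizontal spin "left"
right = np.array([1.0, 1.0]) / np.sqrt(2)       # horizontal spin "right"

p_left, p_right = np.abs(left @ up) ** 2, np.abs(right @ up) ** 2
print(p_left, p_right)                          # 0.5 and 0.5
print((-1) * p_left + (+1) * p_right)           # average outcome: 0.0
```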

Entanglement from the relativity principle

According to quantum information theory, all of quantum mechanics, including its quantum entangled states, is based on the qubit with its quantum superposition.

What my colleagues and I proposed is that this quantum superposition results from the relativity principle, which (again) states the laws of physics are the same for all observers with different orientations in space.

If the electron with a vertical spin in the up direction were to pass straight through the horizontal magnets as you might expect, it would have no spin horizontally. This would violate the relativity principle, which says the particle should have a spin regardless of whether it’s being measured in the horizontal or vertical direction.

Because an electron with a vertical spin in the up direction does have a spin when measured horizontally, quantum information theorists can say that the relativity principle is (ultimately) responsible for quantum entanglement.

And since there is no force used in this principle explanation, there are none of the “spooky actions at a distance” that Einstein derided.

With quantum entanglement’s technological implications for quantum computing firmly established, it’s nice to know that one big question about its origin may be answered with a highly regarded physics principle.

About the Author:

William Mark Stuckey, Professor of Physics, Elizabethtown College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI pioneers want bots to replace human teachers – here’s why that’s unlikely

By Annette Vee, University of Pittsburgh 

OpenAI co-founder Andrej Karpathy envisions a world in which artificial intelligence bots can be made into subject matter experts that are “deeply passionate, great at teaching, infinitely patient and fluent in all of the world’s languages.” Through this vision, the bots would be available to “personally tutor all 8 billion of us on demand.”

The embodiment of that idea is his latest venture, Eureka Labs, which is merely the newest prominent example of how tech entrepreneurs are seeking to use AI to revolutionize education.

Karpathy believes AI can solve a long-standing challenge: the scarcity of good teachers who are also subject experts.

And he’s not alone. OpenAI CEO Sam Altman, Khan Academy CEO Sal Khan, venture capitalist Marc Andreessen and University of California, Berkeley computer scientist Stuart Russell also dream of bots becoming on-demand tutors, guidance counselors and perhaps even replacements for human teachers.

As a researcher focused on AI and other new writing technologies, I’ve seen many cases of high-tech “solutions” for teaching problems that fizzled. AI certainly may enhance aspects of education, but history shows that bots probably won’t be an effective substitute for humans. That’s because students have long shown resistance to machines, however sophisticated, and a natural preference to connect with and be inspired by fellow humans.

The costly challenge of teaching writing to the masses

As the director of the English Composition program at the University of Pittsburgh, I oversee instruction for some 7,000 students a year. Programs like mine have long wrestled with how to teach writing efficiently and effectively to so many people at once.

The best answer so far is to keep class sizes to no more than 15 students. Research shows that students learn writing better in smaller classes because they are more engaged.

Yet small classes require more instructors, and that can get expensive for school districts and colleges.

Resuscitating dead scholars

Enter AI. Imagine, Karpathy posits, that the great theoretical physicist Richard Feynman, who has been dead for over 35 years, could be brought back to life as a bot to tutor students.

For Karpathy, an ideal learning experience would be working through physics material “together with Feynman, who is there to guide you every step of the way.” Feynman, renowned for his accessible way of presenting theoretical physics, could work with an unlimited number of students at the same time.

In this vision, human teachers still design course materials, but they are supported by an AI teaching assistant. This teacher-AI team “could run an entire curriculum of courses on a common platform,” Karpathy wrote. “If we are successful, it will be easy for anyone to learn anything,” whether it be a lot of people learning about one subject, or one person learning about many subjects.

Other efforts to personalize learning fall short

Yet technologies for personal learning aren’t new. Exactly 100 years ago, at the 1924 meeting of the American Psychological Association, inventor Sidney Pressey unveiled an “automatic teacher” made out of typewriter parts that asked multiple-choice questions.

In the 1950s, the psychologist B. F. Skinner designed “teaching machines.” If a student answered a question correctly, the machine advanced to ask about the problem’s next step. If not, the student stayed on that step of the problem until they solved it.

In both cases, students received positive feedback for correct answers. This gave them confidence as well as skills in the subject. The problem was that students didn’t learn much – they also found these nonhuman approaches boring, education writer Audrey Watters documents in “Teaching Machines.”

More recently, the world of education saw the rise and fall of “massive open online courses,” or MOOCs. These classes, which delivered video and quizzes, were heralded by The New York Times and others for their promise of democratizing education. Again, students lost interest and logged off.

Other web-based efforts have popped up, including course platforms like Coursera and Outlier. But the same problem persists: There’s no genuine interactivity to keep students engaged. One of the latest casualties in online learning was 2U, which acquired leading MOOC company edX in 2021 and in July 2024 filed for bankruptcy restructuring to reduce its US$945 million debt load. The culprit: falling demand for services.

Now comes the proliferation of AI-fueled platforms. Khanmigo deploys AI tutors to, as Sal Khan writes in his latest book, “personalize and customize coaching, as well as adapt to an individual’s needs while hovering beside our learners as they work.”

The educational publisher Pearson, too, is integrating AI into its educational materials. More than 1,000 universities are adopting these materials for fall 2024.

AI in education isn’t just coming; it’s here. The question is how effective it will be.

Drawbacks in AI learning

Some tech leaders believe bots can customize teaching and replace human teachers and tutors, but they’re likely to face the same problem as these earlier attempts: Students may not like it.

There are important reasons why, too. Students are unlikely to be inspired and excited the way they can be by a live instructor. Students in crisis often turn to trusted adults like teachers and coaches for help. Would they do the same with a bot? And what would the bot do if they did? We don’t know yet.

A lack of data privacy and security can also be a deterrent. These platforms collect volumes of information on students and their academic performance that can be misused or sold. Legislation may try to prevent this, but some popular platforms are based in China, out of reach of U.S. law.

Finally, there are concerns even if AI tutors and teachers become popular. If a bot teaches millions of students at once, we may lose diversity of thought. Where does originality come from when everyone receives the same teachings, especially if “academic success” relies on regurgitating what the AI instructor says?

The idea of an AI tutor in every pocket sounds exciting. I would love to learn physics from Richard Feynman or writing from Maya Angelou or astronomy from Carl Sagan. But history reminds us to be cautious and keep a close eye on whether students are actually learning. The promises of personalized learning are no guarantee for positive results.

About the Author:

Annette Vee, Associate Professor of English, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI supercharges data center energy use – straining the grid and slowing sustainability efforts

By Ayse Coskun, Boston University 

The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.

The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research firm. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.
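
A back-of-envelope comparison using the figures cited above makes the gap concrete; the daily query volume below is a made-up input, not a reported statistic.

```python
# Back-of-envelope comparison using the figures cited above: 2.9 Wh per
# ChatGPT request versus roughly a tenth of that for a traditional
# Google query. The daily query volume is a made-up input.
queries_per_day = 1_000_000_000          # hypothetical volume

ai_wh, search_wh = 2.9, 0.29
ai_mwh = queries_per_day * ai_wh / 1e6       # Wh -> MWh
search_mwh = queries_per_day * search_wh / 1e6

print(f"AI: {ai_mwh:,.0f} MWh/day vs search: {search_mwh:,.0f} MWh/day")
# -> 2,900 MWh/day vs 290 MWh/day for the same billion queries
```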

The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a nuclear reactor at the Three Mile Island power plant – the site of the infamous 1979 disaster – which has been dormant since 2019.

Data centers have had continuous growth for decades, but the magnitude of growth in the still-young era of large language models has been exceptional. AI requires a lot more computational and data storage resources than the pre-AI rate of data center growth could provide.

AI and the grid

Thanks to AI, the electrical grid – in many places already near its capacity or prone to stability challenges – is experiencing more pressure than before. There is also a substantial lag between computing growth and grid growth. Data centers take one to two years to build, while adding new power to the grid requires over four years.

As a recent report from the Electric Power Research Institute lays out, just 15 states contain 80% of the data centers in the U.S. Some states – such as Virginia, home to Data Center Alley – astonishingly have over 25% of their electricity consumed by data centers. There are similar trends of clustered data center growth in other parts of the world. For example, Ireland has become a data center nation.

AI is having a big impact on the electrical grid and, potentially, the climate.

Along with the need to add more power generation to sustain this growth, nearly all countries have decarbonization goals. This means they are striving to integrate more renewable energy sources into the grid. Renewables such as wind and solar are intermittent: The wind doesn’t always blow and the sun doesn’t always shine. The dearth of cheap, green and scalable energy storage means the grid faces an even bigger problem matching supply with demand.

Additional challenges to data center growth include increasing use of water cooling for efficiency, which strains limited fresh water sources. As a result, some communities are pushing back against new data center investments.

Better tech

There are several ways the industry is addressing this energy crisis. First, computing hardware has gotten substantially more energy efficient over the years in terms of the operations executed per watt consumed. Data centers’ power usage effectiveness – the ratio of total power consumed to the power used purely for computing – has been reduced to 1.5 on average, and even to an impressive 1.2 in advanced facilities. New data centers have more efficient cooling by using water cooling and external cool air when it’s available.
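
Power usage effectiveness is simple to compute; in the invented example below, a facility that spends 5 megawatts on cooling and other overhead for every 10 megawatts of computing has a PUE of 1.5, the industry average cited above.

```python
# Power usage effectiveness (PUE) is total facility power divided by the
# power that goes purely to computing; 1.0 would mean zero overhead.
# The wattages below are invented for illustration.
it_power_mw = 10.0                # servers doing the actual computing
overhead_mw = 5.0                 # cooling, power conversion, lighting, ...

pue = (it_power_mw + overhead_mw) / it_power_mw
print(pue)   # 1.5, matching the industry average cited above
```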

Unfortunately, efficiency alone is not going to solve the sustainability problem. In fact, Jevons paradox points to how efficiency may result in an increase of energy consumption in the longer run. In addition, hardware efficiency gains have slowed down substantially, as the industry has hit the limits of chip technology scaling.

To continue improving efficiency, researchers are designing specialized hardware such as accelerators, new integration technologies such as 3D chips, and new chip cooling techniques.

Similarly, researchers are increasingly studying and developing data center cooling technologies. The Electric Power Research Institute report endorses new cooling methods, such as air-assisted liquid cooling and immersion cooling. While liquid cooling has already made its way into data centers, only a few new data centers have implemented the still-in-development immersion cooling.

Flexible future

A new way of building AI data centers is flexible computing, where the key idea is to compute more when electricity is cheaper, more available and greener, and less when it’s more expensive, scarce and polluting.

Data center operators can convert their facilities to be a flexible load on the grid. Academia and industry have provided early examples of data center demand response, where data centers regulate their power depending on power grid needs. For example, they can schedule certain computing tasks for off-peak hours.

Implementing broader and larger scale flexibility in power consumption requires innovation in hardware, software and grid-data center coordination. Especially for AI, there is much room to develop new strategies to tune data centers’ computational loads and therefore energy consumption. For example, data centers can scale back accuracy to reduce workloads when training AI models.
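
As a toy example of that price-following idea, here is a greedy sketch that schedules deferrable jobs into the cheapest hours. The prices, job names and one-job-per-hour capacity are all invented.

```python
# A greedy sketch of flexible computing: deferrable jobs are scheduled into
# the hours with the cheapest (or greenest) electricity. Prices, job names
# and the one-job-per-hour capacity are all invented for illustration.
hourly_price = {0: 31, 3: 22, 6: 18, 9: 35, 12: 40, 15: 38, 18: 45, 21: 25}

deferrable_jobs = ["train-model-A", "batch-index", "video-encode"]

# Cheapest hours first; assign one job per hour slot.
cheapest_hours = sorted(hourly_price, key=hourly_price.get)
schedule = dict(zip(deferrable_jobs, cheapest_hours))
print(schedule)   # jobs land in the 18-, 22- and 25-cost hours
```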

Realizing this vision requires better modeling and forecasting. Data centers can try to better understand and predict their loads and conditions. It’s also important to predict the grid load and growth.

The Electric Power Research Institute’s load forecasting initiative involves activities to help with grid planning and operations. Comprehensive monitoring and intelligent analytics – possibly relying on AI – for both data centers and the grid are essential for accurate forecasting.

On the edge

The U.S. is at a critical juncture with the explosive growth of AI. It is immensely difficult to integrate hundreds of megawatts of electricity demand into already strained grids. It might be time to rethink how the industry builds data centers.

One possibility is to sustainably build more edge data centers – smaller, widely distributed facilities – to bring computing to local communities. Edge data centers can also reliably add computing power to dense, urban regions without further stressing the grid. While these smaller centers currently make up 10% of data centers in the U.S., analysts project the market for smaller-scale edge data centers to grow by over 20% in the next five years.

Along with converting data centers into flexible and controllable loads, innovating in the edge data center space may make AI’s energy demands much more sustainable.

This article has been updated to correct an editing error about the date Three Mile Island’s Unit 1 nuclear reactor was shut down.

About the Author:

Ayse Coskun, Professor of Electrical and Computer Engineering, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.