
Quantum computers are like kaleidoscopes – why unusual metaphors help illustrate science and technology

By Sorin Adam Matei, Purdue University 

Quantum computing is like Forrest Gump’s box of chocolates: You never know what you’re gonna get. Quantum phenomena – the behavior of matter and energy at the atomic and subatomic levels – are not definite, one thing or another. They are opaque clouds of possibility or, more precisely, probabilities. When someone observes a quantum system, it loses its quantum-ness and “collapses” into a definite state.

Quantum phenomena are mysterious and often counterintuitive. This makes quantum computing difficult to understand. People naturally reach for the familiar to attempt to explain the unfamiliar, and for quantum computing this usually means using traditional binary computing as a metaphor. But explaining quantum computing this way leads to major conceptual confusion, because at a base level the two are entirely different animals.

This problem highlights the often mistaken belief that common metaphors are more useful than exotic ones when explaining new technologies. Sometimes the opposite is true: The freshness of the metaphor should match the novelty of the discovery.

The uniqueness of quantum computers calls for an unusual metaphor. As a communications researcher who studies technology, I believe that quantum computers can be better understood as kaleidoscopes.

This image could give you a better grasp of how quantum computers work.
Crystal A Murray/Flickr, CC BY-NC-SA

Digital certainty vs. quantum probabilities

The gap between understanding classical and quantum computers is a wide chasm. Classical computers store and process information via transistors, which are electronic devices that take binary, deterministic states: one or zero, yes or no. Quantum computers, in contrast, handle information probabilistically at the atomic and subatomic levels.

Classical computers use the flow of electricity to sequentially open and close gates to record or manipulate information. Information flows through circuits, triggering actions through a series of switches that record information as ones and zeros. Using binary math, bits are the foundation of all things digital, from the apps on your phone to the account records at your bank and the Wi-Fi signals bouncing around your home.

In contrast, quantum computers use changes in the quantum states of atoms, ions, electrons or photons. Quantum computers link, or entangle, multiple quantum particles so that changes to one affect all the others. They then introduce interference patterns, like multiple stones tossed into a pond at the same time. Some waves combine to create higher peaks, while some waves and troughs combine to cancel each other out. Carefully calibrated interference patterns guide the quantum computer toward the solution of a problem.

Physicist Katie Mack explains quantum probability.

Achieving a quantum leap, conceptually

The term “bit” is a metaphor. The word suggests that during calculations, a computer can break up large values into tiny ones – bits of information – which electronic devices such as transistors can more easily process.

Using metaphors like this has a cost, though. They are not perfect. Metaphors are incomplete comparisons that transfer knowledge from something people know well to something they are working to understand. The bit metaphor suggests, as common sense might, that a computer handles many different kinds of bits at once. In fact, the binary method does not: All bits are the same.

The smallest unit of a quantum computer is called the quantum bit, or qubit. But transferring the bit metaphor to quantum computing is even less adequate than using it for classical computing. Transferring a metaphor from one use to another blunts its effect.

The prevalent explanation of quantum computing is that while classical computers can store or process only a zero or one in a transistor or other computational unit, quantum computers supposedly store and handle both zero and one and other values in between at the same time through the process of superposition.

Superposition, however, does not store one or zero or any other number simultaneously. There is only an expectation that the values might be zero or one at the end of the computation. This quantum probability is the polar opposite of the binary method of storing information.
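
To make the distinction concrete, here is a minimal sketch in plain Python (a classical simulation, not code for a real quantum computer; the function name and the choice of amplitudes are ours for illustration). A qubit in superposition carries only probability amplitudes, and a definite zero or one appears only when it is measured.

```python
import numpy as np

# A classical toy model of measuring a single qubit: the qubit is described by
# two amplitudes, and measurement returns 0 or 1 with probabilities equal to
# the squared magnitudes of those amplitudes.

def measure(amplitudes, shots=1000):
    """Simulate repeated measurements of a qubit with the given amplitudes."""
    probs = np.abs(amplitudes) ** 2
    probs = probs / probs.sum()  # normalize so the probabilities add to 1
    outcomes = np.random.choice([0, 1], size=shots, p=probs)
    return {0: int((outcomes == 0).sum()), 1: int((outcomes == 1).sum())}

# An equal superposition: before measurement there is no stored 0 or 1, only a
# 50/50 expectation of what the outcome will be once the superposition ends.
equal_superposition = np.array([1 / np.sqrt(2), 1 / np.sqrt(2)])
print(measure(equal_superposition))  # roughly {0: 500, 1: 500}
```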

Driven by quantum science’s uncertainty principle, the probability that a qubit stores a one or zero is like Schroedinger’s cat, which can be either dead or alive, depending on when you observe it. But the two different values do not exist simultaneously during superposition. They exist only as probabilities, and an observer cannot determine when or how frequently those values existed before the observation ended the superposition.

Leaving behind these challenges to using traditional binary computing metaphors means embracing new metaphors to explain quantum computing.

Peering into kaleidoscopes

The kaleidoscope metaphor is particularly apt to explain quantum processes. Kaleidoscopes can create infinitely diverse yet orderly patterns using a limited number of colored glass beads, mirror-dividing walls and light. Rotating the kaleidoscope enhances the effect, generating an infinitely variable spectacle of fleeting colors and shapes.

The shapes not only change but can’t be reversed. If you turn the kaleidoscope in the opposite direction, the imagery will generally remain the same, but the exact composition of each shape or even their structures will vary as the beads randomly mingle with each other. In other words, while the beads, light and mirrors could replicate some patterns shown before, these are never absolutely the same.

If you don’t have a kaleidoscope handy, this video is a good substitute.

Using the kaleidoscope metaphor, the solution a quantum computer provides – the final pattern – depends on when you stop the computing process. Quantum computing isn’t about guessing the state of any given particle but using mathematical models of how the interaction among many particles in various states creates patterns, called quantum correlations.

Each final pattern is the answer to a problem posed to the quantum computer, and what you get in a quantum computing operation is a probability that a certain configuration will result.

New metaphors for new worlds

Metaphors make the unknown manageable, approachable and discoverable. Approximating the meaning of a surprising object or phenomenon by extending an existing metaphor is a method that is as old as calling the edge of an ax its “bit” and its flat end its “butt.” The two metaphors take something we understand from everyday life very well, applying it to a technology that needs a specialized explanation of what it does. Calling the cutting edge of an ax a “bit” suggestively indicates what it does, adding the nuance that it changes the object it is applied to. When an ax shapes or splits a piece of wood, it takes a “bite” from it.

Metaphors, however, do much more than provide convenient labels and explanations of new processes. The words people use to describe new concepts change over time, expanding and taking on a life of their own.

When encountering dramatically different ideas, technologies or scientific phenomena, it’s important to use fresh and striking terms as windows to open the mind and increase understanding. Scientists and engineers seeking to explain new concepts would do well to seek out originality and master metaphors – in other words, to think about words the way poets do.

About the Author:

Sorin Adam Matei, Associate Dean for Research, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New database features 250 AI tools that can enhance social science research

By Megan Stubbs-Richardson, Mississippi State University; Devon Brenner, Mississippi State University; Lauren Etheredge, Mississippi State University, and MacKenzie Paul, Baylor University 

AI – or artificial intelligence – is often used as a way to summarize data and improve writing. But AI tools also represent a powerful and efficient way to analyze large amounts of text to search for patterns. In addition, AI tools can assist with developing research products that can be shared widely.

It’s with that in mind that we, as researchers in social science, developed a new database of AI tools for the field. In the database, we compiled information about each tool and documented whether it was useful for literature reviews, data collection and analyses, or research dissemination. We also provided information on the costs, logins and plug-in extensions available for each tool.

When asked about their perceptions of AI, many social scientists express caution or apprehension. In a sample of faculty and students from over 600 institutions, only 22% of university faculty reported that they regularly used AI tools.

From combing through lengthy transcripts or text-based data to writing literature reviews and sharing results, we believe AI can help social science researchers – such as those in psychology, sociology and communication – as well as others get the most out of their data and present it to a wider audience.

Analyze text using AI

Qualitative research often involves poring over transcripts or written language to identify themes and patterns. While this kind of research is powerful, it is also labor-intensive. The power of AI platforms to sift through large datasets not only saves researchers time, but it can also help them analyze data that couldn’t have been analyzed previously because of the size of the dataset.

Specifically, AI can assist social scientists by identifying potential themes or common topics in large, text-based data that scientists can interrogate using qualitative research methods. For example, AI can analyze 15 million social media posts to identify themes in how people coped with COVID-19. These themes can then give researchers insight into larger trends in the data, allowing us to refine criteria for a more in-depth, qualitative analysis.
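
As a rough illustration of this kind of theme detection (a generic sketch, not one of the specific tools in our database), the snippet below uses scikit-learn’s topic-modeling routines to pull two themes out of a handful of made-up posts. With real data, the input would be millions of documents and the number of topics much larger.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder posts standing in for a much larger social media dataset.
posts = [
    "Staying home and baking bread got me through lockdown",
    "Video calls with family every weekend kept me sane",
    "Running outside every morning helped with the stress",
    "Learned to bake sourdough while stuck at home",
    "Daily walks and exercise were my coping strategy",
]

# Convert the posts to word counts, then fit a small topic model.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Print the most heavily weighted words for each discovered theme.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Theme {i}: {', '.join(top)}")
```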

AI tools can also be used to adapt language and scientists’ word choice in research designs. In particular, AI can reduce bias by improving the wording of questions in surveys or refining keywords used in social media data collection.

Identify gaps in knowledge

Another key task in research is to scan the field for previous work to identify gaps in knowledge. AI applications are built on systems that can synthesize text. This makes literature reviews – the section of a research paper that summarizes other research on the same topic – and writing processes more efficient.

Research shows that human feedback to AI, such as providing examples of simple logic, can significantly improve the tools’ ability to perform complex reasoning. With this in mind, we can continually revise our instructions to AI and refine its ability to pull relevant literature.

However, social scientists must be wary of fake sources – a big concern with generative AI. It is essential to verify any sources AI tools provide to ensure they come from peer-reviewed journals.

Share research findings

AI tools can quickly summarize research findings in a reader-friendly way by assisting with writing blogs, creating infographics and producing presentation slides and even images.

Our database contains AI tools that can also help scientists present their findings on social media. One tool worth highlighting is BlogTweet. This free AI tool allows users to copy and paste text from an article like this one to generate tweet threads and start conversations.

Be aware of the cost of AI tools

Two-thirds of the tools in the database cost money. While our primary objective was to identify the most useful tools for social scientists, we also sought to identify open-source tools and curated a list of 85 free tools that can support literature reviews, writing, data collection, analysis and visualization efforts.

12 best free AI tools for academic research and researchers.

In our analysis of the cost of AI tools, we also found that many offer “freemium” access to tools. This means you can explore a free version of the product. More advanced versions of the tool are available through the purchase of tokens or subscription plans.

For some tools, costs can be somewhat hidden or unexpected. For instance, a tool that seems open source on the surface may actually have rate limits, and users may find that they’ve run out of free questions to ask the AI.

The future of the database

Since the release of the Artificial Intelligence Applications for Social Science Research Database on Oct. 5, 2023, it has been downloaded over 400 times across 49 countries. In the database, we found 131 AI tools useful for literature reviews, summaries or writing. As many as 146 AI tools are useful for data collection or analysis, and 108 are useful for research dissemination.

We continue to update the database and hope that it can aid academic communities in their exploration of AI and generate new conversations. The more that social scientists use the database, the more they can work toward a consensus on ethical approaches to using AI in research and analysis.

About the Authors:

Megan Stubbs-Richardson, Assistant Research Professor at the Social Science Research Center, Mississippi State University; Devon Brenner, Professor of education, Mississippi State University; Lauren Etheredge, Research associate in sociology, Mississippi State University, and MacKenzie Paul, Doctoral student in psychology, Baylor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI plus gene editing promises to shift biotech into high gear

By Marc Zimmer, Connecticut College 

During her chemistry Nobel Prize lecture in 2018, Frances Arnold said, “Today we can for all practical purposes read, write and edit any sequence of DNA, but we cannot compose it.” That isn’t true anymore.

Since then, science and technology have progressed so much that artificial intelligence has learned to compose DNA, and with genetically modified bacteria, scientists are on their way to designing and making bespoke proteins.

The goal is that with AI’s designing talents and gene editing’s engineering abilities, scientists can modify bacteria to act as mini factories producing new proteins that can reduce greenhouse gases, digest plastics or act as species-specific pesticides.

As a chemistry professor and computational chemist who studies molecular science and environmental chemistry, I believe that advances in AI and gene editing make this a realistic possibility.

Gene sequencing – reading life’s recipes

All living things contain genetic materials – DNA and RNA – that provide the hereditary information needed to replicate themselves and make proteins. Proteins constitute 75% of human dry weight. They make up muscles, enzymes, hormones, blood, hair and cartilage. Understanding proteins means understanding much of biology. The order of nucleotide bases in DNA, or RNA in some viruses, encodes this information, and genomic sequencing technologies identify the order of these bases.

The Human Genome Project was an international effort that sequenced the entire human genome from 1990 to 2003. Thanks to rapidly improving technologies, it took seven years to sequence the first 1% of the genome and another seven years for the remaining 99%. By 2003, scientists had the complete sequence of the 3 billion nucleotide base pairs coding for 20,000 to 25,000 genes in the human genome.

However, understanding the functions of most proteins and correcting their malfunctions remained a challenge.

AI learns proteins

Each protein’s shape is critical to its function and is determined by the sequence of its amino acids, which is in turn determined by the gene’s nucleotide sequence. Misfolded proteins have the wrong shape and can cause illnesses such as neurodegenerative diseases, cystic fibrosis and Type 2 diabetes. Understanding these diseases and developing treatments requires knowledge of protein shapes.

Before 2016, the only way to determine the shape of a protein was through X-ray crystallography, a laboratory technique that uses the diffraction of X-rays by single crystals to determine the precise three-dimensional arrangement of atoms in a molecule. At that time, the structure of about 200,000 proteins had been determined by crystallography, costing billions of dollars.

AlphaFold, a machine learning program, used these crystal structures as a training set to determine the shape of the proteins from their nucleotide sequences. And in less than a year, the program calculated the protein structures of all 214 million genes that have been sequenced and published. The protein structures AlphaFold determined have all been released in a freely available database.
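
Those predictions can also be retrieved programmatically. The sketch below is only a guess at how that looks in practice: the endpoint URL and the JSON field names are assumptions about the AlphaFold Protein Structure Database’s public API, so check its documentation before relying on them. It downloads the predicted structure for one example protein, identified by its UniProt accession.

```python
import requests

# Assumed endpoint and field names for the AlphaFold Protein Structure Database;
# verify against the database's API documentation before relying on them.
UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, used here as an example
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}"

response = requests.get(url, timeout=30)
response.raise_for_status()
entry = response.json()[0]   # the API is assumed to return a list of entries

pdb_url = entry["pdbUrl"]    # assumed field holding the predicted structure file
structure = requests.get(pdb_url, timeout=30).text
with open(f"{UNIPROT_ID}_predicted.pdb", "w") as f:
    f.write(structure)
print(f"Saved predicted structure for {UNIPROT_ID}")
```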

To effectively address noninfectious diseases and design new drugs, scientists need more detailed knowledge of how proteins, especially enzymes, bind small molecules. Enzymes are protein catalysts that enable and regulate biochemical reactions.

AI system AlphaFold3 allows scientists to make intricately detailed models of life’s molecular machinery.

AlphaFold3, released May 8, 2024, can predict protein shapes and the locations where small molecules can bind to these proteins. In rational drug design, drugs are designed to bind proteins involved in a pathway related to the disease being treated. The small molecule drugs bind to the protein binding site and modulate its activity, thereby influencing the disease path. By being able to predict protein binding sites, AlphaFold3 will enhance researchers’ drug development capabilities.

AI + CRISPR = composing new proteins

Around 2015, the development of CRISPR technology revolutionized gene editing. CRISPR can be used to find a specific part of a gene, change or delete it, make the cell express more or less of its gene product, or even add an utterly foreign gene in its place.

In 2020, Jennifer Doudna and Emmanuelle Charpentier received the Nobel Prize in chemistry “for the development of a method (CRISPR) for genome editing.” With CRISPR, gene editing, which once took years and was species specific, costly and laborious, can now be done in days and for a fraction of the cost.

AI and genetic engineering are advancing rapidly. What was once complicated and expensive is now routine. Looking ahead, the dream is of bespoke proteins designed and produced by a combination of machine learning and CRISPR-modified bacteria. AI would design the proteins, and bacteria altered using CRISPR would produce the proteins. Enzymes produced this way could potentially breathe in carbon dioxide and methane while exhaling organic feedstocks, or break down plastics into substitutes for concrete.

I believe that these ambitions are not unrealistic, given that genetically modified organisms already account for 2% of the U.S. economy in agriculture and pharmaceuticals.

Two groups have made functioning enzymes from scratch that were designed by differing AI systems. David Baker’s Institute for Protein Design at the University of Washington devised a new deep-learning-based protein design strategy it named “family-wide hallucination,” which it used to make a unique light-emitting enzyme. Meanwhile, biotech startup Profluent has used an AI trained on the sum of all CRISPR-Cas knowledge to design new functioning genome editors.

If AI can learn to make new CRISPR systems as well as bioluminescent enzymes that work and have never been seen on Earth, there is hope that pairing CRISPR with AI can be used to design other new bespoke enzymes. Although the CRISPR-AI combination is still in its infancy, once it matures it is likely to be highly beneficial and could even help the world tackle climate change.

It’s important to remember, however, that the more powerful a technology is, the greater the risks it poses. Also, humans have not been very successful at engineering nature due to the complexity and interconnectedness of natural systems, which often leads to unintended consequences.

About the Author:

Marc Zimmer, Professor of Chemistry, Connecticut College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Cybersecurity researchers spotlight a new ransomware threat – be careful where you upload files

By Selcuk Uluagac, Florida International University 

You probably know better than to click on links that download unknown files onto your computer. It turns out that uploading files can get you into trouble, too.

Today’s web browsers are much more powerful than earlier generations of browsers. They’re able to manipulate data within both the browser and the computer’s local file system. Users can send and receive email, listen to music or watch a movie within a browser with the click of a button.

Unfortunately, these capabilities also mean that hackers can find clever ways to abuse the browsers to trick you into letting ransomware lock up your files when you think that you’re simply doing your usual tasks online.

I’m a computer scientist who studies cybersecurity. My colleagues and I have shown how hackers can gain access to your computer’s files via the File System Access Application Programming Interface (API), which enables web applications in modern browsers to interact with the users’ local file systems.

The threat applies to Google’s Chrome and Microsoft’s Edge browsers but not Apple’s Safari or Mozilla’s Firefox. Chrome accounts for 65% of browsers used, and Edge accounts for 5%. To the best of my knowledge, there have been no reports of hackers using this method so far.

My colleagues, who include a Google security researcher, and I have communicated with the developers responsible for the File System Access API, and they have expressed support for our work and interest in our approaches to defending against this kind of attack. We also filed a security report with Microsoft but have not heard back.

Double-edged sword

Today’s browsers are almost operating systems unto themselves. They can run software programs and encrypt files. These capabilities, combined with the browser’s access to the host computer’s files – including ones in the cloud, shared folders and external drives – via the File System Access API, create a new opportunity for ransomware.

Imagine you want to edit photos on a benign-looking free online photo editing tool. When you upload the photos for editing, any hackers who control the malicious editing tool can access the files on your computer via your browser. The hackers would gain access to the folder you are uploading from and all subfolders. Then the hackers could encrypt the files in your file system and demand a ransom payment to decrypt them.

Today’s web browsers are more powerful – and in some ways more vulnerable – than their predecessors.

Ransomware is a growing problem. Attacks have hit individuals as well as organizations, including Fortune 500 companies, banks, cloud service providers, cruise operators, threat-monitoring services, chip manufacturers, governments, medical centers and hospitals, insurance companies, schools, universities and even police departments. In 2023, organizations paid more than US$1.1 billion in ransomware payments to attackers, and 19 ransomware attacks targeted organizations every second.

It is no wonder ransomware is the No. 1 arms race today between hackers and security specialists. Traditional ransomware runs on your computer after hackers have tricked you into downloading it.

New defenses for a new threat

A team of researchers I lead at the Cyber-Physical Systems Security Lab at Florida International University, including postdoctoral researcher Abbas Acar and Ph.D. candidate Harun Oz, in collaboration with Google Senior Research Scientist Güliz Seray Tuncay, have been investigating this new type of potential ransomware for the past two years. Specifically, we have been exploring how powerful modern web browsers have become and how they can be weaponized by hackers to create novel forms of ransomware.

In our paper, RøB: Ransomware over Modern Web Browsers, which was presented at the USENIX Security Symposium in August 2023, we showed how this emerging ransomware strain is easy to design and how damaging it can be. In particular, we designed and implemented the first browser-based ransomware called RøB and analyzed its use with browsers running on three different major operating systems – Windows, Linux and MacOS – five cloud providers and five antivirus products.

Our evaluations showed that RøB is capable of encrypting numerous types of files. Because RøB runs within the browser, there are no malicious payloads for a traditional antivirus program to catch. This means existing ransomware detection systems struggle against this powerful browser-based ransomware.

We proposed three different defense approaches to mitigate this new ransomware type. These approaches operate at different levels – browser, file system and user – and complement one another.

The first approach temporarily halts a web application – a program that runs in the browser – in order to detect encrypted user files. The second approach monitors the activity of the web application on the user’s computer to identify ransomware-like patterns. The third approach introduces a new permission dialog box to inform users about the risks and implications associated with allowing web applications to access their computer’s file system.
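
To give a sense of what ransomware-like patterns can look like in practice, here is a generic heuristic (our illustration, not the detection method from our paper): well-encrypted data has byte entropy close to the 8-bits-per-byte maximum, so a monitor could flag a web application that suddenly rewrites many files with near-random contents.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of a byte string, in bits per byte (0 to 8)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Ordinary text sits well below 8 bits/byte; well-encrypted data approaches 8.
plain = b"Meeting notes: budget review at 10am, send slides to the team." * 50
random_like = bytes(range(256)) * 16  # stand-in for ciphertext-like content

print(f"plain text entropy:  {byte_entropy(plain):.2f} bits/byte")
print(f"random-like entropy: {byte_entropy(random_like):.2f} bits/byte")
# A file monitor could flag a burst of rewrites whose entropy jumps toward 8.
```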

When it comes to protecting your computer, be careful about where you upload as well as download files. Your uploads could be giving hackers an “in” to your computer.

About the Author:

Selcuk Uluagac, Professor of Computing and Information Science, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Are tomorrow’s engineers ready to face AI’s ethical challenges?

By Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan 

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.

These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and a doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What’s more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master’s students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.

First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal benefits.” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I’m supposed to go to?”

Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I’ll react.”

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off’

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.

Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it is presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students’ understanding, but report feeling pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research project share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”

“If I’m paying money to attend ethics class as an engineer, I’m going to be furious,” one said.

These attitudes sometimes extend to how students view engineers’ role in society. One interviewee in our current study, for example, said that an engineer’s “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.

Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school are more likely to understand their responsibility to the public in their professional roles, and recognize the need for collective problem solving. Compared to engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.

To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public’s first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

About the Authors:

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

How AI and a popular card game can help engineers predict catastrophic failure – by finding the absence of a pattern

By John Edward McCarthy, Arts & Sciences at Washington University in St. Louis 

Humans are very good at spotting patterns, or repeating features people can recognize. For instance, ancient Polynesians navigated across the Pacific by recognizing many patterns, from the stars’ constellations to more subtle ones such as the directions and sizes of ocean swells.

Very recently, mathematicians like me have started to study large collections of objects that have no patterns of a particular sort. How large can collections be before a specified pattern has to appear somewhere in the collection? Understanding such scenarios can have significant real-world implications: For example, what’s the smallest number of server failures that would lead to the severing of the internet?

Mathematician Jordan Ellenberg at the University of Wisconsin and researchers at Google DeepMind have proposed a novel approach to this problem. Their work uses artificial intelligence to find large collections that don’t contain a specified pattern, which can help us understand some worst-case scenarios.

Can you find a matching set?
Cmglee/Wikimedia Commons, CC BY-SA

Patterns in the card game Set

The idea of patternless collections can be illustrated by a popular card game called Set. In this game, players lay out 12 cards, face up. Each card has a different simple picture on it. They vary in terms of number, color, shape and shading. Each of these four features can have one of three values.

Players race to look for “sets,” which are groups of three cards in which every feature is either the same or different in each card. For instance, cards with one solid red diamond, two solid green diamonds and three solid purple diamonds form a set: All three have different numbers (one, two, three), the same shading (solid), different colors (red, green, purple) and the same shape (diamond).
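
The rule that every feature must be either the same or different across the three cards is easy to check mechanically. In the sketch below (our own encoding of the cards, not anything official to the game), each card is a tuple of four feature values, and three cards form a set exactly when no feature has two matching values and one odd one out.

```python
def is_set(card1, card2, card3):
    """Each card is a tuple of four features (number, shading, color, shape).
    Three cards form a set if every feature is all-same or all-different."""
    for a, b, c in zip(card1, card2, card3):
        if len({a, b, c}) == 2:  # two alike and one different breaks the rule
            return False
    return True

# The example from the text: one solid red diamond, two solid green diamonds,
# three solid purple diamonds.
print(is_set((1, "solid", "red", "diamond"),
             (2, "solid", "green", "diamond"),
             (3, "solid", "purple", "diamond")))  # True
```

Equivalently, if each feature is coded as 0, 1 or 2, three cards form a set exactly when each feature’s three values add up to a multiple of 3.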

Marsha Falco originally created the game Set to help explain her research on population genetics.

Finding a set is usually possible – but not always. If none of the players can find a set from the 12 cards on the table, then they flip over three more cards. But they still might not be able to find a set in these 15 cards. The players continue to flip over cards, three at a time, until someone spots a set.

So what is the maximum number of cards you can lay out without forming a set?

In 1971, mathematician Giuseppe Pellegrino showed that the largest collection of cards without a set is 20. But if you chose 20 cards at random, “no set” would happen only about one in a trillion times. And finding these “no set” collections is an extremely hard problem to solve.

Finding ‘no set’ with AI

If you wanted to find the largest collection of cards with no set, you could in principle do an exhaustive search of every possible collection of cards chosen from the deck of 81 cards. But there are an enormous number of possibilities – on the order of 10²⁴ (that’s a “1” followed by 24 zeros). And if you increase the number of features of the cards from four to, say, eight, the complexity of the problem would overwhelm any computer doing an exhaustive search for “no set” collections.

Mathematicians love to think about computationally difficult problems like this. These complex problems, if approached in the right way, can become tractable.

It’s easier to find best-case scenarios – here, that would mean the fewest number of cards that could contain a set. But there were few known strategies that could explore bad scenarios – here, that would mean a large collection of cards that do not contain a set.

Ellenberg and his collaborators approached the bad scenario with a type of AI called large language models, or LLMs. The researchers first wrote computer programs that generate some examples of collections of many cards that contain no set. These collections typically have “cards” with more than four features.

Then they fed these programs to the LLM, which soon learned how to write many similar programs. The programs that gave rise to the largest set-free collections were chosen to undergo the process again. Iterating this process, repeatedly tweaking the most successful programs, enabled the researchers to find larger and larger set-free collections.
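
Here is a minimal sketch of that generate-score-iterate loop. Everything in it is a stand-in: propose_variants plays the role of the LLM proposing tweaked programs, and run_program replaces a candidate program with a simple greedy generator of set-free collections (using the sum-to-a-multiple-of-3 test described earlier). The real system works on actual program text and uses far more sophisticated prompting and scoring.

```python
import random

def propose_variants(best_programs, n=8):
    """Stand-in for the LLM step: produce tweaked copies of the best programs.
    Here a 'program' is just a random seed that drives a greedy generator."""
    return [seed + random.randint(1, 1000) for seed in best_programs for _ in range(n)]

def run_program(seed):
    """Stand-in for executing a candidate program: greedily build a set-free
    collection of 4-feature cards, each feature taking the values 0, 1 or 2."""
    rng = random.Random(seed)
    deck = [(a, b, c, d) for a in range(3) for b in range(3)
            for c in range(3) for d in range(3)]
    rng.shuffle(deck)
    chosen = []
    for card in deck:
        # Keep the card only if it completes no set with two already-chosen cards.
        if all(any((x + y + z) % 3 != 0 for x, y, z in zip(c1, c2, card))
               for i, c1 in enumerate(chosen) for c2 in chosen[:i]):
            chosen.append(card)
    return chosen

best = [0, 1, 2]                     # initial "programs"
for _ in range(5):                   # iterate: propose, score, keep the best
    candidates = best + propose_variants(best)
    candidates.sort(key=lambda s: len(run_program(s)), reverse=True)
    best = candidates[:3]
print("largest set-free collection found:", len(run_program(best[0])))
```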

This is another version of a ‘no set,’ where no three components of a set are linked by a line.
Romera-Peredes et al./Nature, CC BY-SA

This method allows people to explore disordered collections – in this instance, collections of cards that contain no set – in an entirely new way. It does not guarantee that researchers will find the absolute worst-case scenario, but they will find scenarios that are much worse than a random generation would yield.

Their work can help researchers understand how events might align in a way that leads to catastrophic failure.

For example, how vulnerable is the electrical grid to a malicious attacker who destroys select substations? Suppose that a bad collection of substations is one where they don’t form a connected grid. The worst-case scenario is now a very large number of substations that, when taken all together, still don’t yield a connected grid. The substations excluded from this collection are the smallest set a malicious actor would need to destroy to deliberately disconnect the grid.

The work of Ellenberg and his collaborators demonstrates yet another way that AI is a very powerful tool. But to solve very complex problems, at least for now, it still needs human ingenuity to guide it.

About the Author:

John Edward McCarthy, Professor of Mathematics, Arts & Sciences at Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Building fairness into AI is crucial – and hard to get right

By Ferdinando Fioretto, University of Virginia 

Artificial intelligence’s capacity to process and analyze vast amounts of data has revolutionized decision-making processes, making operations in health care, finance, criminal justice and other sectors of society more efficient and, in many instances, more effective.

With this transformative power, however, comes a significant responsibility: the need to ensure that these technologies are developed and deployed in a manner that is equitable and just. In short, AI needs to be fair.

The pursuit of fairness in AI is not merely an ethical imperative but a requirement in order to foster trust, inclusivity and the responsible advancement of technology. However, ensuring that AI is fair is a major challenge. And on top of that, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.

Why fairness in AI matters

Fairness in AI has emerged as a critical area of focus for researchers, developers and policymakers. It transcends technical achievement, touching on ethical, social and legal dimensions of the technology.

Ethically, fairness is a cornerstone of building trust and acceptance of AI systems. People need to trust that AI decisions that affect their lives – for example, hiring algorithms – are made equitably. Socially, AI systems that embody fairness can help address and mitigate historical biases – for example, those against women and minorities – fostering inclusivity. Legally, embedding fairness in AI systems helps bring those systems into alignment with anti-discrimination laws and regulations around the world.

Unfairness can stem from two primary sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms processing data that reflects societal prejudices or lacks diversity can perpetuate “like me” biases. These biases favor candidates who are similar to the decision-makers or those already in an organization. When biased data is then used to train a machine learning algorithm to aid a decision-maker, the algorithm can propagate and even amplify these biases.

Why fairness in AI is hard

Fairness is inherently subjective, influenced by cultural, social and personal perspectives. In the context of AI, researchers, developers and policymakers often translate fairness to the idea that algorithms should not perpetuate or exacerbate existing biases or inequalities.

However, measuring fairness and building it into AI systems is fraught with subjective decisions and technical difficulties. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equality of opportunity and individual fairness.

Why the concept of algorithmic fairness is so challenging.

These definitions involve different mathematical formulations and underlying philosophies. They also often conflict, highlighting the difficulty of satisfying all fairness criteria simultaneously in practice.
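
To see how two of those definitions can pull in opposite directions, consider the small numerical sketch below. The data is invented purely for illustration: a model whose true positive rates match across two groups satisfies equality of opportunity, yet it violates demographic parity whenever the groups’ underlying base rates differ.

```python
import numpy as np

# Toy predictions for two groups, A and B (label 1 = favorable outcome).
# Group membership, true labels and model decisions are all made up.
group = np.array(["A"] * 10 + ["B"] * 10)
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,   # group A: 4 qualified of 10
                   1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # group B: 8 qualified of 10
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,
                   1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # predicts the true labels exactly

def selection_rate(g):
    """Fraction of group g that receives the favorable decision."""
    return y_pred[group == g].mean()

def true_positive_rate(g):
    """Fraction of qualified members of group g that receives the favorable decision."""
    mask = (group == g) & (y_true == 1)
    return y_pred[mask].mean()

# Demographic parity compares selection rates; equality of opportunity compares
# true positive rates. This perfectly accurate model satisfies the second
# criterion but violates the first, because the base rates differ.
print("selection rate  A vs B:", selection_rate("A"), selection_rate("B"))        # 0.4 vs 0.8
print("true pos. rate  A vs B:", true_positive_rate("A"), true_positive_rate("B"))  # 1.0 vs 1.0
```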

In addition, fairness cannot be distilled into a single metric or guideline. It encompasses a spectrum of considerations including, but not limited to, equality of opportunity, treatment and impact.

Unintended effects on fairness

The multifaceted nature of fairness means that AI systems must be scrutinized at every level of their development cycle, from the initial design and data collection phases to their final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity. AI systems are seldom deployed in isolation. They are used as part of often complex and important decision-making processes, such as making recommendations about hiring or allocating funds and resources, and are subject to many constraints, including security and privacy.

Research my colleagues and I conducted shows that constraints such as computational resources, hardware types and privacy can significantly influence the fairness of AI systems. For instance, the need for computational efficiency can lead to simplifications that inadvertently overlook or misrepresent marginalized groups.

In our study on network pruning – a method to make complex machine learning models smaller and faster – we found that this process can unfairly affect certain groups. This happens because the pruning might not consider how different groups are represented in the data and by the model, leading to biased outcomes.

Similarly, privacy-preserving techniques, while crucial, can obscure the data necessary to identify and mitigate biases or disproportionately affect the outcomes for minorities. For example, when statistical agencies add noise to data to protect privacy, this can lead to unfair resource allocation because the added noise affects some groups more than others. This disproportionality can also skew decision-making processes that rely on this data, such as resource allocation for public services.

These constraints do not operate in isolation but intersect in ways that compound their impact on fairness. For instance, when privacy measures exacerbate biases in data, it can further amplify existing inequalities. This makes it important to have a comprehensive understanding and approach to both privacy and fairness for AI development.

The path forward

Making AI fair is not straightforward, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation and collaboration. Given that bias is pervasive in society, I believe that people working in the AI field should recognize that it’s not possible to achieve perfect fairness and instead strive for continuous improvement.

This challenge requires a commitment to rigorous research, thoughtful policymaking and ethical practice. To make it work, researchers, developers and users of AI will need to ensure that considerations of fairness are woven into all aspects of the AI pipeline, from its conception through data collection and algorithm design to deployment and beyond.

About the Author:

Ferdinando Fioretto, Assistant Professor of Computer Science, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Bringing AI up to speed – autonomous auto racing promises safer driverless cars on the road

By Madhur Behl, University of Virginia 

The excitement of auto racing comes from split-second decisions and daring passes by fearless drivers. Imagine that scene, but without the driver – the car alone, guided by the invisible hand of artificial intelligence. Can the rush of racing unfold without a driver steering the course? It turns out that it can.

Enter autonomous racing, a field that’s not just about high-speed competition but also pushing the boundaries of what autonomous vehicles can achieve and improving their safety.

Over a century ago, at the dawn of automobiles, as society shifted from horse-drawn to motor-powered vehicles, there was public doubt about the safety and reliability of the new technology. Motorsport racing was organized to showcase the technological performance and safety of these horseless carriages. Similarly, autonomous racing is the modern arena to prove the reliability of autonomous vehicle technology as driverless cars begin to hit the streets.

Autonomous racing’s high-speed trials mirror the real-world challenges that autonomous vehicles face on streets: adjusting to unexpected changes and reacting in fractions of a second. Mastering these challenges on the track, where speeds are higher and reaction times shorter, leads to safer autonomous vehicles on the road.

Autonomous race cars pass, or ‘overtake,’ others on the Las Vegas Motor Speedway track.

I am a computer science professor who studies artificial intelligence, robotics and autonomous vehicles, and I lead the Cavalier Autonomous Racing team at the University of Virginia. The team competes in the Indy Autonomous Challenge, a global contest where universities pit fully autonomous Indy race cars against each other. Since its 2021 inception, the event has drawn top international teams to prestigious circuits like the Indianapolis Motor Speedway. The field, marked by both rivalry and teamwork, shows that collective problem-solving drives advances in autonomous vehicle safety.

At the Indy Autonomous Challenge passing competition held at the 2024 Consumer Electronics Show in Las Vegas in January 2024, our Cavalier team clinched second place and hit speeds of 143 mph (230 kilometers per hour) while autonomously overtaking another race car, affirming its status as a leading American team. TUM Autonomous Motorsport from the Technical University of Munich won the event.

An autonomous race car built by the Technical University of Munich prepares to pass the University of Virginia’s entrant.
Cavalier Autonomous Racing, University of Virginia, CC BY-ND

Pint-size beginnings

The field of autonomous racing didn’t begin with race cars on professional race tracks but with miniature cars at robotics conferences. In 2015, my colleagues and I engineered a 1/10 scale autonomous race car. We transformed a remote-controlled car into a small but powerful research and educational tool, which I named F1tenth, playing on the name of the traditional Formula One, or F1, race car. The F1tenth platform is now used by over 70 institutions worldwide to construct their miniaturized autonomous racers.

The F1tenth Autonomous Racing Grand Prix is now a marquee event at robotics conferences where teams from across the planet gather, each wielding vehicles that are identical in hardware and sensors, to engage in what is essentially an intense “battle of algorithms.” Victory on the track is claimed not by raw power but by the advanced AI algorithms’ control of the cars.

These race cars are small, but the challenges to autonomous driving are sizable.

F1tenth has also emerged as an engaging and accessible gateway for students to delve into robotics research. Over the years, I’ve reached thousands of students via my courses and online lecture series, which explains the process of how to build, drive and autonomously race these vehicles.

Getting real

Today, the scope of our research has expanded significantly, advancing from small-scale models to actual autonomous Indy cars that compete at speeds upward of 150 mph (241 kph), executing complex overtaking maneuvers with other autonomous vehicles on the racetrack. The cars are built on a modified version of the Indy NXT chassis and are outfitted with sensors and controllers to allow autonomous driving. Indy NXT race cars are used in professional racing and are slightly smaller versions of the Indy cars made famous by the Indianapolis 500.

The Cavalier Autonomous Racing team stands behind their driverless race car.
Cavalier Autonomous Racing, University of Virginia, CC BY-ND

The gritty reality of racing these advanced machines on real racetracks pushes the boundaries of what autonomous vehicles can do. Autonomous racing takes the challenges of robotics and AI to new levels, requiring researchers to refine our understanding of how machines perceive their environment, make safe decisions and control complex maneuvers at a high speed where traditional methods begin to falter.

Precision is critical, and the margin for error in steering and acceleration is razor-thin, requiring a sophisticated grasp and exact mathematical description of the car’s movement, aerodynamics and drivetrain system. In addition, autonomous racing researchers create algorithms that use data from cameras, radar and lidar, which is like radar but with lasers instead of radio waves, to steer around competitors and safely navigate the high-speed and unpredictable racing environment.

My team has shared the world’s first open dataset for autonomous racing, inviting researchers everywhere to join in refining the algorithms that could help define the future of autonomous vehicles.

The data from the competitions is available for other researchers to use.

Crucible for autonomous vehicles

More than just a technological showcase, autonomous racing is a critical research frontier. When autonomous systems can reliably function in these extreme conditions, they inherently possess a buffer when operating in the ordinary conditions of street traffic.

Autonomous racing is a testbed where competition spurs innovation, collaboration fosters growth, and AI-controlled cars racing to the finish line chart a course toward safer autonomous vehicles.

About the Author:

Madhur Behl, Associate Professor of Robotics and Artificial Intelligence, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Why AI can’t replace air traffic controllers

By Amy Pritchett, Penn State 

After hours of routine operations, an air traffic controller gets a radio call from a small aircraft whose cockpit indicators can’t confirm that the plane’s landing gear is extended for landing. The controller arranges for the pilot to fly low by the tower so the controller can visually check the plane’s landing gear. All appears well. “It looks like your gear is down,” the controller tells the pilot.

The controller calls for the airport fire trucks to be ready just in case, and the aircraft circles back to land safely. Scenarios like this play out regularly. In the air traffic control system, everything must meet the highest levels of safety, but not everything goes according to plan.

Contrast this with the still science-fiction vision of future artificial intelligence “pilots” flying autonomous aircraft, complete with an autonomous air traffic control system handling aircraft as easily as routers shuttling data packets on the internet.

I’m an aerospace engineer who led a National Academies study ordered by Congress about air traffic controller staffing. Researchers are continually working on new technologies that automate elements of the air traffic control system, but technology can execute only those functions that are planned for during its design and so can’t modify standard procedures. As the scenario above illustrates, humans are likely to remain a necessary central component of air traffic control for a long time to come.

What air traffic controllers do

The Federal Aviation Administration’s fundamental guidance for the responsibility of air traffic controllers states: “The primary purpose of the air traffic control system is to prevent a collision involving aircraft.” Air traffic controllers are also charged with providing “a safe, orderly and expeditious flow of air traffic” and other services supporting safety, such as helping pilots avoid mountains and other hazardous terrain and hazardous weather, to the extent they can.

Air traffic controllers’ jobs vary. Tower controllers provide the local control that clears aircraft to take off and land, making sure that they are spaced safely apart. They also provide ground control, directing aircraft to taxi and notifying pilots of flight plans and potential safety concerns on that day before flight. Tower controllers are aided by some displays but mostly look outside from the towers and talk with pilots via radio. At larger airports staffed by FAA controllers, surface surveillance displays show controllers the aircraft and other vehicles on the ground on the airfield.

This FAA animation explains the three basic components of the U.S. air traffic control system.

Approach and en route controllers, on the other hand, sit in front of large displays in dark and quiet rooms. They communicate with pilots via radio. Their displays show aircraft locations on a map view with key features of the airspace boundaries and routes.

The 21 en route control centers in the U.S. manage traffic that is between and above airports and thus typically flying at higher speeds and altitudes.

Controllers at approach control facilities transition departing aircraft from local control after takeoff up and into en route airspace. They similarly take arriving aircraft from en route airspace, line them up with the landing approach and hand them off to tower controllers.

A controller at each display manages all the traffic within a sector. Sectors can vary in size from a few cubic miles, focused on sequencing aircraft landing at a busy airport, to en route sectors spanning more than 30,000 cubic miles (125,045 cubic km) where and when there are few aircraft flying. If a sector gets busy, a second and even third controller might assist, or the sector might be split into two, with another display and controller team managing the second.

How technology can help

Air traffic controllers have a stressful job and are subject to fatigue and information overload. Public concern about a growing number of close calls has put a spotlight on aging technology and staffing shortages that have led to air traffic controllers working mandatory overtime. New technologies can help alleviate those issues.

The air traffic control system is incorporating new technologies in several ways. The FAA’s NextGen air transportation system initiative is providing controllers with more – and more accurate – information.

Controllers’ displays originally showed only radar tracking. They now can tap into all the data known about each flight within the en route automation modernization system. This system integrates radar, automatic position reports from aircraft via automatic dependent surveillance-broadcast, weather reports, flight plans and flight histories.

Automated systems help alert controllers to potential conflicts between aircraft, or to aircraft that are too close to high ground or structures, and suggest how to sequence aircraft into smooth traffic flows. In testimony to the U.S. Senate on Nov. 9, 2023, about airport safety, FAA Chief Operating Officer Timothy Arel said that the administration is developing or improving several air traffic control systems.
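
To picture what a conflict alert does, here is a minimal sketch of the underlying geometry: project two tracks forward, find their closest point of approach and flag the pair if the predicted separation gets too small. This is an illustration only, not the FAA’s actual alerting logic; the flat coordinate frame, the 5-nautical-mile threshold and the two-minute lookahead are simplifying assumptions.

```python
# Illustrative sketch of a conflict alert: project two straight-line tracks
# forward, find the closest point of approach (CPA) and flag the pair if the
# predicted separation drops below a threshold. NOT the FAA's actual logic;
# the flat x/y frame and the thresholds are simplifying assumptions.

import math

def closest_point_of_approach(p1, v1, p2, v2):
    """Return (time_to_cpa_seconds, miss_distance_nm) for two straight-line tracks.

    p1, p2: current positions in nautical miles (x, y)
    v1, v2: velocities in nautical miles per second (vx, vy)
    """
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]      # relative position
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]      # relative velocity
    speed_sq = vx * vx + vy * vy
    if speed_sq == 0:                          # identical velocities: separation never changes
        return 0.0, math.hypot(rx, ry)
    t_cpa = max(0.0, -(rx * vx + ry * vy) / speed_sq)   # only look forward in time
    miss_x, miss_y = rx + vx * t_cpa, ry + vy * t_cpa
    return t_cpa, math.hypot(miss_x, miss_y)

def conflict_alert(p1, v1, p2, v2, min_separation_nm=5.0, lookahead_s=120):
    """Flag a potential conflict if predicted separation falls below the minimum."""
    t_cpa, miss = closest_point_of_approach(p1, v1, p2, v2)
    return t_cpa <= lookahead_s and miss < min_separation_nm

# Example: two aircraft converging nearly head-on, 20 nm apart, about 480 knots each.
kts = 1.0 / 3600.0  # knots -> nautical miles per second
print(conflict_alert((0, 0), (480 * kts, 0), (20, 0.5), (-480 * kts, 0)))  # True
```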

Researchers are using machine learning to analyze and predict aspects of air traffic and air traffic control, including air traffic flow between cities and air traffic controller behavior.
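
As a rough illustration of that kind of work, the sketch below trains a simple model to predict the next hour’s traffic count on a hypothetical city pair from the previous few hours. The data is synthetic and the features deliberately minimal; real studies draw on weather, airline schedules, delay histories and much more.

```python
# Hedged, minimal sketch of traffic-flow prediction: forecast the next hour's
# flight count from the previous three hours plus the hour of day.
# The data below is synthetic, generated purely for illustration.

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 60)                                  # 60 days of hourly data
flights = 30 + 15 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Features: the previous three hourly counts plus hour of day; target: the next count.
X = np.column_stack([flights[2:-1], flights[1:-2], flights[:-3], hours[3:] % 24])
y = flights[3:]

model = GradientBoostingRegressor().fit(X[:-24], y[:-24])    # hold out the last day
print("Mean absolute error (flights/hour):",
      np.abs(model.predict(X[-24:]) - y[-24:]).mean().round(2))
```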

How technology can complicate matters

New technology can also cause profound changes to air traffic control in the form of new types of aircraft. For example, current regulations mostly limit uncrewed aircraft to flying below 400 feet (122 meters) above the ground and away from airports. These are the drones used by first responders, news organizations, surveyors, delivery services and hobbyists.

NASA and the FAA are leading the development of a traffic control system for drones and other uncrewed aircraft.

However, some emerging uncrewed aircraft companies are proposing to fly in controlled airspace. Some plan to have their aircraft fly regular flight routes and interact normally with air traffic controllers via voice radio. These include Reliable Robotics and Xwing, which are separately working to automate the Cessna Caravan, a small cargo airplane.

Others are targeting new business models, such as advanced air mobility, the concept of small, highly automated electric aircraft – electric air taxis, for example. These would require dramatically different routes and procedures for handling air traffic.

Expect the unexpected

An air traffic controller’s routine can be disrupted by an aircraft that requires special handling. This could range from an emergency to priority handling of medical flights or Air Force One. Controllers are given the responsibility and the flexibility to adapt how they manage their airspace.

The requirements for the front line of air traffic control are a poor match for AI’s capabilities. People expect the air traffic system to continue to be the safest complex, high-technology system ever. It achieves this standard by adhering to procedures when practical, which is something AI can do, and by adapting and exercising good judgment whenever something unplanned occurs or a new operation is implemented – a notable weakness of today’s AI.

Indeed, it is when conditions are the worst – when controllers figure out how to handle aircraft with severe problems, airport crises or widespread airspace closures due to security concerns or infrastructure failures – that controllers’ contributions to safety are the greatest.

Also, controllers don’t fly the aircraft. They communicate and interact with others to guide the aircraft, and so their responsibility is fundamentally to serve as part of a team – another notable weakness of AI.

As an engineer and designer, I’m most excited about the potential for AI to analyze the big data records of past air traffic operations in pursuit of, for example, more efficient routes of flight. However, as a pilot, I’m glad to hear a controller’s calm voice on the radio helping me land quickly and safely should I have a problem.

About the Author:

Amy Pritchett, Professor of Aerospace Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Combining two types of molecular boron nitride could create a hybrid material used in faster, more powerful electronics

By Pulickel Ajayan, Rice University and Abhijit Biswas, Rice University 

In chemistry, structure is everything. Compounds with the same chemical formula can have different properties depending on how their atoms are arranged. And compounds with different chemical formulas but a similar atomic arrangement can have similar properties.

Graphene and a form of boron nitride called hexagonal boron nitride fall into the latter group. Graphene is made up of carbon atoms. Boron nitride, BN, is composed of boron and nitrogen atoms. While their chemical formulas differ, they have a similar structure – so similar that many chemists call hexagonal boron nitride “white graphene.”

Carbon-based graphene has lots of useful properties. It’s thin but strong, and it conducts heat and electricity very well, making it ideal for use in electronics.

Similarly, hexagonal boron nitride has a host of properties similar to graphene that could improve biomedical imaging and drug delivery, as well as computers, smartphones and LEDs. Researchers have studied this type of boron nitride for many years.

But hexagonal boron nitride isn’t the only useful form this compound comes in.

As materials engineers, our research team has been investigating another type of boron nitride called cubic boron nitride. We want to know if combining the properties of hexagonal boron nitride with cubic boron nitride could open the door to even more useful applications.

Cubic boron nitride, shown on the left, and hexagonal boron nitride, shown on the right.
Oddball/Wikimedia Commons, CC BY-NC-SA

Hexagonal versus cubic

Hexagonal boron nitride is, as you might guess, boron and nitrogen atoms arranged in flat sheets of hexagons. It looks honeycomb-shaped, like graphene. Cubic boron nitride has a three-dimensional lattice structure and looks like diamond at the atomic level.

H-BN is thin, soft and used in cosmetics to give them a silky texture. It doesn’t melt or degrade even under extreme heat, which also makes it useful in electronics and other applications. Some scientists predict it could be used to build a radiation shield for spacecraft.

C-BN is hard and resistant. It’s used in manufacturing to make cutting tools and drills, and it can keep its sharp edge even at high temperatures. It can also help dissipate heat in electronics.

Even though h-BN and c-BN might seem different, when put together, our research has found they hold even more potential than either on its own.

The two forms of boron nitride have some similarities and some differences, but when combined, they can create a substance with a variety of scientific applications.
Abhijit Biswas

Both types of boron nitride conduct heat and can provide electrical insulation, but one, h-BN, is soft, and the other, c-BN, is hard. So, we wanted to see if they could be used together to create materials with interesting properties.

For example, combining their different behaviors could make a coating material effective for high-temperature structural applications. C-BN could provide strong adhesion to a surface, while h-BN’s lubricating properties could resist wear and tear. Together, they would keep the material from overheating.

Making boron nitride

This class of materials doesn’t occur naturally, so scientists must make it in the lab. In general, high-quality c-BN has been difficult to synthesize, whereas h-BN is relatively easy to make as high-quality films using what are called vapor phase deposition methods.

In vapor phase deposition, we heat up boron- and nitrogen-containing materials until they evaporate. The evaporated molecules then get deposited onto a surface, where they cool down, bond together and form a thin film of BN.

Our research team has worked on combining h-BN and c-BN using processes similar to vapor phase deposition, but we can also mix powders of the two together. The idea is to build a material whose thermal, mechanical and electronic properties we can fine-tune by adjusting the mix of h-BN and c-BN.

Our team has found the composite substance made from combining both forms of BN together has a variety of potential applications. When you point a laser beam at the substance, it flashes brightly. Researchers could use this property to create display screens and improve radiation therapies in the medical field.

We’ve also found we can tailor how heat-conductive the composite material is. This means engineers could use this BN composite in machines that manage heat. The next step is trying to manufacture large plates made of an h-BN and c-BN composite. If done precisely, we can tailor the mechanical, thermal and optical properties to specific applications.
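
One way to picture that tuning – a textbook first approximation, not the specific model behind our work – is the rule-of-mixtures bounds for a two-phase composite, in which the effective thermal conductivity falls between a parallel (upper) and a series (lower) estimate:

$$
k_{\parallel} = v_h k_h + v_c k_c,
\qquad
\frac{1}{k_{\perp}} = \frac{v_h}{k_h} + \frac{v_c}{k_c},
$$

where \(v_h, v_c\) are the volume fractions and \(k_h, k_c\) the thermal conductivities of the hexagonal and cubic phases. In this simplified picture, shifting the h-BN to c-BN ratio moves the composite between these bounds; in practice, the microstructure matters as well.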

In electronics, h-BN could act as a dielectric – or insulator – alongside graphene in certain low-power electronics. As a dielectric, h-BN would help electronics operate efficiently and keep their charge.

C-BN could work alongside diamond to create ultrawide band gap materials that allow electronic devices to work at a much higher power. Diamond and c-BN both conduct heat well, and together they could help cool down these high-power devices, which generate lots of extra heat.

H-BN and c-BN separately could lead to electronics that perform exceptionally well in different contexts – together, they have a host of potential applications, as well.

Our BN composite could improve heat spreaders and insulators, and it could work in energy storage devices such as supercapacitors – fast-charging storage devices – and rechargeable batteries.

We’ll continue studying BN’s properties, and how we can use it in lubricants, coatings and wear-resistant surfaces. Developing ways to scale up production will be key for exploring its applications, from materials science to electronics and even environmental science.

About the Author:

Pulickel Ajayan, Professor of Materials Science and NanoEngineering, Rice University and Abhijit Biswas, Research Scientist in Materials Science and Nanoengineering, Rice University

This article is republished from The Conversation under a Creative Commons license. Read the original article.