Archive for Programming

Tiny robots and AI algorithms could help to craft material solutions for cleaner environments

By Mahshid Ahmadi, University of Tennessee 

Many human activities release pollutants into the air, water and soil. These harmful chemicals threaten the health of both people and the ecosystem. According to the World Health Organization, air pollution causes an estimated 4.2 million deaths annually.

Scientists are looking into solutions, and one potential avenue is a class of materials called photocatalysts. When triggered by light, these materials undergo chemical reactions that initial studies have shown can break down common toxic pollutants.

I am a materials science and engineering researcher at the University of Tennessee. With the help of robots and artificial intelligence, my colleagues and I are making and testing new photocatalysts with the goal of mitigating air pollution.

Breaking down pollutants

The photocatalysts work by generating charged carriers in the presence of light. These charged carriers are tiny particles that can move around and cause chemical reactions. When they come into contact with water and oxygen in the environment, they produce substances called reactive oxygen species. These highly active reactive oxygen species can bond to parts of the pollutants and then either decompose the pollutants or turn them into harmless – or even useful – products.

To facilitate the photocatalytic reaction, researchers in the Ahmadi lab put plates of perovskite nanocrystals and pollutants under bright light to see whether the reaction breaks down the pollutants.
Astita Dubey

But some materials used in the photocatalytic process have limitations. For example, they can’t start the reaction unless the light has enough energy – lower-energy infrared rays, or even visible light, won’t trigger the reaction.

Another problem is that the charged particles involved in the reaction can recombine too quickly, which means they join back together before finishing the job. In these cases, the pollutants either do not decompose completely or the process takes a long time to accomplish.

Additionally, the surface of these photocatalysts can sometimes change during or after the photocatalytic reaction, which affects how they work and how efficient they are.

To overcome these limitations, scientists on my team are trying to develop new photocatalytic materials that work efficiently to break down pollutants. We also focus on making sure these materials are nontoxic so that our pollution-cleaning materials aren’t causing further pollution.

This plate from the Ahmadi lab is used while testing how perovskite nanocrystals and light break down pollutants, like the blue dye shown. The light blue color indicates partial degradation, while transparent water signifies complete degradation.
Astita Dubey

Teeny tiny crystals

Scientists on my team use automated experimentation and artificial intelligence to figure out which photocatalytic materials could be the best candidates to quickly break down pollutants. We’re making and testing materials called hybrid perovskites, which are tiny crystals – they’re about a 10th the thickness of a strand of hair.

These nanocrystals are made of a blend of organic (carbon-based) and inorganic (non-carbon-based) components.

They have a few unique qualities, like their excellent light-absorbing properties, which come from how they’re structured at the atomic level. They’re tiny, but mighty. Optically, they’re amazing too – they interact with light in fascinating ways to generate a large number of tiny charge carriers and trigger photocatalytic reactions.

These materials efficiently transport electrical charges, which allows them to carry light energy and drive the chemical reactions. They’re also used to make solar panels more efficient and in LED lights, which create the vibrant displays you see on TV screens.

There are thousands of potential types of hybrid nanocrystals. So, my team wanted to figure out how to make and test as many as we can quickly, to see which are the best candidates for cleaning up toxic pollutants.

Bringing in robots

Instead of making and testing samples by hand – which takes weeks or months – we’re using smart robots, which can produce and test at least 100 different materials within an hour. These small liquid-handling robots can precisely move, mix and transfer tiny amounts of liquid from one place to another. They’re controlled by a computer that guides their acceleration and accuracy.

The Opentrons pipetting robot helps Astita Dubey, a visiting scientist working with the Ahmadi lab, synthesize materials and treat them with organic pollutants to test whether they can break down the pollutants.
Jordan Marshall

We also use machine learning to guide this process. Machine learning algorithms can analyze test data quickly and then learn from that data for the next set of experiments executed by the robots. These machine learning algorithms can quickly identify patterns and insights in collected data that would normally take much longer for a human eye to catch.
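For readers curious what this looks like in practice, here is a minimal sketch of a machine-learning-guided experiment loop: fit a model to the compositions tested so far, then ask it which untested candidates look most promising for the robots’ next batch. The composition features, toy response and scikit-learn model choice are illustrative assumptions, not the Ahmadi lab’s actual pipeline.

```python
# Minimal sketch of a machine-learning-guided experiment loop.
# Features, response and model choice are illustrative, not the lab's real pipeline.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Pretend each candidate perovskite is described by two composition fractions.
candidates = rng.uniform(0, 1, size=(500, 2))

# Results from the first robotic batch: composition -> measured degradation (%).
tested_x = rng.uniform(0, 1, size=(20, 2))
tested_y = 100 * np.exp(-np.sum((tested_x - 0.6) ** 2, axis=1))  # toy response

# Fit a surrogate model to the data collected so far.
model = GaussianProcessRegressor().fit(tested_x, tested_y)

# Predict performance and uncertainty for every untested candidate, then pick
# the most promising ones for the next robotic batch ("explore + exploit").
mean, std = model.predict(candidates, return_std=True)
next_batch = candidates[np.argsort(mean + std)[-10:]]
print(next_batch)
```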

Our approach aims to simplify and better understand complex photocatalytic systems, helping to create new strategies and materials. By using automated experimentation guided by machine learning, we can now make these systems easier to analyze and interpret, overcoming challenges that were hard to tackle with traditional methods.

About the Author:

Mahshid Ahmadi, Assistant Professor of Materials Science and Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Quantum information theorists are shedding light on entanglement, one of the spooky mysteries of quantum mechanics

By William Mark Stuckey, Elizabethtown College 

The year 2025 marks the 100th anniversary of the birth of quantum mechanics. In the century since the field’s inception, scientists and engineers have used quantum mechanics to create technologies such as lasers, MRI scanners and computer chips.

Today, researchers are looking toward building quantum computers and ways to securely transfer information using an entirely new sister field called quantum information science.

But despite creating all these breakthrough technologies, physicists and philosophers who study quantum mechanics still haven’t come up with the answers to some big questions raised by the field’s founders. Given recent developments in quantum information science, researchers like me are using quantum information theory to explore new ways of thinking about these unanswered foundational questions. And one direction we’re looking into relates Albert Einstein’s relativity principle to the qubit.

Quantum computers

Quantum information science focuses on building quantum computers based on the quantum “bit” of information, or qubit. The qubit is historically grounded in the discoveries of physicists Max Planck and Einstein. They instigated the development of quantum mechanics in 1900 and 1905, respectively, when they discovered that light exists in discrete, or “quantum,” bundles of energy.

These quanta of energy also come in small forms of matter, such as atoms and electrons, which make up everything in the universe. It is the odd properties of these tiny packets of matter and energy that are responsible for the computational advantages of the qubit.

A computer based on a quantum bit rather than a classical bit could have a significant computing advantage. And that’s because a classical bit produces a binary response – either a 1 or a 0 – to only one query.

In contrast, the qubit produces a binary response to infinitely many queries using the property of quantum superposition. This property allows researchers to connect multiple qubits in what’s called a quantum entangled state. Here, the entangled qubits act collectively in a way that arrays of classical bits cannot.

That means a quantum computer can do some calculations much faster than an ordinary computer. For example, one device reportedly used 76 entangled qubits to solve a sampling problem 100 trillion times faster than a classical computer.

But the exact force or principle of nature responsible for this quantum entangled state that underlies quantum computing is a big unanswered question. A solution that my colleagues and I in quantum information theory have proposed has to do with Einstein’s relativity principle.

Quantum superposition and entanglement allow qubits to contain far more information than classical bits.

Quantum information theory

The relativity principle says that the laws of physics are the same for all observers, regardless of where they are in space, how they’re oriented or how they’re moving relative to each other. My team showed how to use the relativity principle in conjunction with the principles of quantum information theory to account for quantum entangled particles.

Quantum information theorists like me think about quantum mechanics as a theory of information principles rather than a theory of forces. That’s very different than the typical approach to quantum physics, in which force and energy are important concepts for doing the calculations. In contrast, quantum information theorists don’t need to know what sort of physical force might be causing the mysterious behavior of entangled quantum particles.

That gives us an advantage for explaining quantum entanglement because, as physicist John Bell proved in 1964, any explanation for quantum entanglement in terms of forces requires what Einstein called “spooky actions at a distance.”

That’s because the measurement outcomes of the two entangled quantum particles are correlated – even if those measurements are done at the same time and the particles are physically separated by a vast distance. So, if a force is causing quantum entanglement, it would have to act faster than the speed of light. And a faster-than-light force violates Einstein’s theory of special relativity.

Quantum entanglement is important to quantum computing.

Many researchers are trying to find an explanation for quantum entanglement that doesn’t require spooky actions at a distance, like my team’s proposed solution.

Classical and quantum entanglement

In entanglement, you can know something about two particles collectively – call them particle 1 and particle 2 – so that when you measure particle 1, you immediately know something about particle 2.

Imagine you’re mailing two friends, whom physicists typically call Alice and Bob, each one glove from the same pair of gloves. When Alice opens her box and sees a left-hand glove, she’ll know immediately that when Bob opens the other box he will see the right-hand glove. Each box and glove combination produces one of two outcomes, either a right-hand glove or a left-hand glove. There’s only one possible measurement – opening the box – so Alice and Bob have entangled classical bits of information.

But in quantum entanglement the situation involves entangled qubits, which behave very differently than classical bits.

Qubit behavior

Consider a property of electrons called spin. When you measure an electron’s spin using magnets that are oriented vertically, you always get a spin that’s up or down, nothing in between. That’s a binary measurement outcome, so this is a bit of information.

Two magnets oriented vertically can measure an electron’s vertical spin. After moving through the magnets, the electron is deflected either up or down. Similarly, two magnets oriented horizontally can measure an electron’s horizontal spin. After moving through the magnets, the electron is deflected either left or right.
Timothy McDevitt

If you turn the magnets on their sides to measure an electron’s spin horizontally, you always get a spin that’s left or right, nothing in between. The vertical and horizontal orientations of the magnets constitute two different measurements of this same bit. So, electron spin is a qubit – it produces a binary response to multiple measurements.

Quantum superposition

Now suppose you first measure an electron’s spin vertically and find it is up, then you measure its spin horizontally. Here’s an analogy: when you stand straight up, you don’t move to your right or your left at all. So, if I measure how much you move side to side as you stand straight up, I’ll get zero.

That’s exactly what you might expect for the vertical spin up electrons. Since they have vertically oriented spin up, analogous to standing straight up, they should not have any spin left or right horizontally, analogous to moving side to side.

Surprisingly, physicists have found that half of them are horizontally right and half are horizontally left. Now it doesn’t seem to make sense that a vertical spin up electron has left spin (-1) and right spin (+1) outcomes when measured horizontally, just as we expect no side-to-side movement when standing straight up.

But when you add up all the left (-1) and right (+1) spin outcomes you do get zero, as we expected in the horizontal direction when our spin state is vertical spin up. So, on average, it’s like having no side-to-side or horizontal movement when we stand straight up.

This 50-50 ratio over the binary (+1 and -1) outcomes is what physicists are talking about when they say that a vertical spin up electron is in a quantum superposition of horizontal spins left and right.
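The 50-50 arithmetic here is simple enough to check with standard textbook math. The short sketch below writes the vertical spin up state as a vector, computes the left and right probabilities with the Born rule, and confirms that the +1 and -1 outcomes average to zero. It is a generic illustration, not data from any experiment.

```python
# Textbook illustration of the 50-50 superposition described above.
import numpy as np

up = np.array([1.0, 0.0])                  # vertical "spin up" state
right = np.array([1.0, 1.0]) / np.sqrt(2)  # horizontal "spin right" state
left = np.array([1.0, -1.0]) / np.sqrt(2)  # horizontal "spin left" state

# Born rule: probability = |overlap|^2
p_right = abs(np.dot(right, up)) ** 2
p_left = abs(np.dot(left, up)) ** 2

average = (+1) * p_right + (-1) * p_left
print(p_right, p_left, average)            # approximately 0.5, 0.5, 0.0
```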

Entanglement from the relativity principle

According to quantum information theory, all of quantum mechanics, including its quantum entangled states, is based on the qubit with its quantum superposition.

What my colleagues and I proposed is that this quantum superposition results from the relativity principle, which (again) states the laws of physics are the same for all observers with different orientations in space.

If the electron with a vertical spin in the up direction were to pass straight through the horizontal magnets as you might expect, it would have no spin horizontally. This would violate the relativity principle, which says the particle should have a spin regardless of whether it’s being measured in the horizontal or vertical direction.

Because an electron with a vertical spin in the up direction does have a spin when measured horizontally, quantum information theorists can say that the relativity principle is (ultimately) responsible for quantum entanglement.

And since there is no force used in this principle explanation, there are none of the “spooky actions at a distance” that Einstein derided.

With quantum entanglement’s technological implications for quantum computing firmly established, it’s nice to know that one big question about its origin may be answered with a highly regarded physics principle.

About the Author:

William Mark Stuckey, Professor of Physics, Elizabethtown College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI pioneers want bots to replace human teachers – here’s why that’s unlikely

By Annette Vee, University of Pittsburgh 

OpenAI co-founder Andrej Karpathy envisions a world in which artificial intelligence bots can be made into subject matter experts that are “deeply passionate, great at teaching, infinitely patient and fluent in all of the world’s languages.” Through this vision, the bots would be available to “personally tutor all 8 billion of us on demand.”

The embodiment of that idea is his latest venture, Eureka Labs, which is merely the newest prominent example of how tech entrepreneurs are seeking to use AI to revolutionize education.

Karpathy believes AI can solve a long-standing challenge: the scarcity of good teachers who are also subject experts.

And he’s not alone. OpenAI CEO Sam Altman, Khan Academy CEO Sal Khan, venture capitalist Marc Andreessen and University of California, Berkeley computer scientist Stuart Russell also dream of bots becoming on-demand tutors, guidance counselors and perhaps even replacements for human teachers.

As a researcher focused on AI and other new writing technologies, I’ve seen many cases of high-tech “solutions” for teaching problems that fizzled. AI certainly may enhance aspects of education, but history shows that bots probably won’t be an effective substitute for humans. That’s because students have long shown resistance to machines, however sophisticated, and a natural preference to connect with and be inspired by fellow humans.

The costly challenge of teaching writing to the masses

As the director of the English Composition program at the University of Pittsburgh, I oversee instruction for some 7,000 students a year. Programs like mine have long wrestled with how to teach writing efficiently and effectively to so many people at once.

The best answer so far is to keep class sizes to no more than 15 students. Research shows that students learn writing better in smaller classes because they are more engaged.

Yet small classes require more instructors, and that can get expensive for school districts and colleges.

Resuscitating dead scholars

Enter AI. Imagine, Karpathy posits, that the great theoretical physicist Richard Feynman, who has been dead for over 35 years, could be brought back to life as a bot to tutor students.

For Karpathy, an ideal learning experience would be working through physics material “together with Feynman, who is there to guide you every step of the way.” Feynman, renowned for his accessible way of presenting theoretical physics, could work with an unlimited number of students at the same time.

In this vision, human teachers still design course materials, but they are supported by an AI teaching assistant. This teacher-AI team “could run an entire curriculum of courses on a common platform,” Karpathy wrote. “If we are successful, it will be easy for anyone to learn anything,” whether it be a lot of people learning about one subject, or one person learning about many subjects.

Other efforts to personalize learning fall short

Yet technologies for personal learning aren’t new. Exactly 100 years ago, at the 1924 meeting of the American Psychological Association, inventor Sidney Pressey unveiled an “automatic teacher” made out of typewriter parts that asked multiple-choice questions.

In the 1950s, the psychologist B. F. Skinner designed “teaching machines.” If a student answered a question correctly, the machine advanced to ask about the problem’s next step. If not, the student stayed on that step of the problem until they solved it.

In both cases, students received positive feedback for correct answers. This gave them confidence as well as skills in the subject. The problem was that students didn’t learn much – they also found these nonhuman approaches boring, education writer Audrey Watters documents in “Teaching Machines.”

More recently, the world of education saw the rise and fall of “massive open online courses,” or MOOCs. These classes, which delivered video and quizzes, were heralded by The New York Times and others for their promise of democratizing education. Again, students lost interest and logged off.

Other web-based efforts have popped up, including course platforms like Coursera and Outlier. But the same problem persists: There’s no genuine interactivity to keep students engaged. One of the latest casualties in online learning was 2U, which acquired leading MOOC company edX in 2021 and in July 2024 filed for bankruptcy restructuring to reduce its US$945 million debt load. The culprit: falling demand for services.

Now comes the proliferation of AI-fueled platforms. Khanmigo deploys AI tutors to, as Sal Khan writes in his latest book, “personalize and customize coaching, as well as adapt to an individual’s needs while hovering beside our learners as they work.”

The educational publisher Pearson, too, is integrating AI into its educational materials. More than 1,000 universities are adopting these materials for fall 2024.

AI in education isn’t just coming; it’s here. The question is how effective it will be.

Drawbacks in AI learning

Some tech leaders believe bots can customize teaching and replace human teachers and tutors, but they’re likely to face the same problem as these earlier attempts: Students may not like it.

There are important reasons why, too. Students are unlikely to be inspired and excited the way they can be by a live instructor. Students in crisis often turn to trusted adults like teachers and coaches for help. Would they do the same with a bot? And what would the bot do if they did? We don’t know yet.

A lack of data privacy and security can also be a deterrent. These platforms collect volumes of information on students and their academic performance that can be misused or sold. Legislation may try to prevent this, but some popular platforms are based in China, out of reach of U.S. law.

Finally, there are concerns even if AI tutors and teachers become popular. If a bot teaches millions of students at once, we may lose diversity of thought. Where does originality come from when everyone receives the same teachings, especially if “academic success” relies on regurgitating what the AI instructor says?

The idea of an AI tutor in every pocket sounds exciting. I would love to learn physics from Richard Feynman or writing from Maya Angelou or astronomy from Carl Sagan. But history reminds us to be cautious and keep a close eye on whether students are actually learning. The promises of personalized learning are no guarantee of positive results.

About the Author:

Annette Vee, Associate Professor of English, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI supercharges data center energy use – straining the grid and slowing sustainability efforts

By Ayse Coskun, Boston University 

The artificial intelligence boom has had such a profound effect on big tech companies that their energy consumption, and with it their carbon emissions, have surged.

The spectacular success of large language models such as ChatGPT has helped fuel this growth in energy demand. At 2.9 watt-hours per ChatGPT request, AI queries require about 10 times the electricity of traditional Google queries, according to the Electric Power Research Institute, a nonprofit research firm. Emerging AI capabilities such as audio and video generation are likely to add to this energy demand.
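The “about 10 times” comparison is easy to reproduce. The quick arithmetic below assumes roughly 0.3 watt-hours for a traditional search query – a commonly cited estimate that is not stated in the article – alongside the 2.9 watt-hours figure for a ChatGPT request.

```python
# Back-of-the-envelope arithmetic behind the "about 10 times" comparison.
# The ~0.3 Wh figure for a traditional search is an assumed, commonly cited estimate.
chatgpt_wh_per_query = 2.9
search_wh_per_query = 0.3

print(chatgpt_wh_per_query / search_wh_per_query)      # roughly 10x
print(chatgpt_wh_per_query * 1_000_000 / 1000, "kWh")  # about 2,900 kWh per million AI queries
```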

The energy needs of AI are shifting the calculus of energy companies. They’re now exploring previously untenable options, such as restarting a nuclear reactor, dormant since 2019, at the Three Mile Island power plant, the site of the infamous disaster in 1979.

Data centers have had continuous growth for decades, but the magnitude of growth in the still-young era of large language models has been exceptional. AI requires a lot more computational and data storage resources than the pre-AI rate of data center growth could provide.

AI and the grid

Thanks to AI, the electrical grid – in many places already near its capacity or prone to stability challenges – is experiencing more pressure than before. There is also a substantial lag between computing growth and grid growth. Data centers take one to two years to build, while adding new power to the grid requires over four years.

As a recent report from the Electric Power Research Institute lays out, just 15 states contain 80% of the data centers in the U.S. Some states – such as Virginia, home to Data Center Alley – astonishingly have over 25% of their electricity consumed by data centers. There are similar trends of clustered data center growth in other parts of the world. For example, Ireland has become a data center nation.

AI is having a big impact on the electrical grid and, potentially, the climate.

Along with the need to add more power generation to sustain this growth, nearly all countries have decarbonization goals. This means they are striving to integrate more renewable energy sources into the grid. Renewables such as wind and solar are intermittent: The wind doesn’t always blow and the sun doesn’t always shine. The dearth of cheap, green and scalable energy storage means the grid faces an even bigger problem matching supply with demand.

Additional challenges to data center growth include increasing use of water cooling for efficiency, which strains limited fresh water sources. As a result, some communities are pushing back against new data center investments.

Better tech

There are several ways the industry is addressing this energy crisis. First, computing hardware has gotten substantially more energy efficient over the years in terms of the operations executed per watt consumed. Data centers’ power usage effectiveness, a metric that compares the total power a facility draws with the power used for computing alone, has been reduced to 1.5 on average, and even to an impressive 1.2 in advanced facilities. New data centers have more efficient cooling by using water cooling and external cool air when it’s available.
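As a rough sketch of how that metric works, power usage effectiveness divides a facility’s total power draw by the power that goes to computing itself. The wattages below are hypothetical, chosen only to match the 1.5 and 1.2 averages cited above.

```python
# Power usage effectiveness (PUE): total facility power divided by computing (IT) power.
# The wattages are hypothetical, chosen to match the averages cited in the text.
def pue(it_power_kw: float, overhead_power_kw: float) -> float:
    return (it_power_kw + overhead_power_kw) / it_power_kw

print(pue(1000, 500))  # 1.5 - cooling and other infrastructure add 50% on top of computing
print(pue(1000, 200))  # 1.2 - an advanced facility adds only 20% overhead
```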

Unfortunately, efficiency alone is not going to solve the sustainability problem. In fact, Jevons paradox points to how efficiency may result in an increase of energy consumption in the longer run. In addition, hardware efficiency gains have slowed down substantially, as the industry has hit the limits of chip technology scaling.

To continue improving efficiency, researchers are designing specialized hardware such as accelerators, new integration technologies such as 3D chips, and new chip cooling techniques.

Similarly, researchers are increasingly studying and developing data center cooling technologies. The Electric Power Research Institute report endorses new cooling methods, such as air-assisted liquid cooling and immersion cooling. While liquid cooling has already made its way into data centers, only a few new data centers have implemented the still-in-development immersion cooling.

Flexible future

A new way of building AI data centers is flexible computing, where the key idea is to compute more when electricity is cheaper, more available and greener, and less when it’s more expensive, scarce and polluting.

Data center operators can convert their facilities to be a flexible load on the grid. Academia and industry have provided early examples of data center demand response, where data centers regulate their power depending on power grid needs. For example, they can schedule certain computing tasks for off-peak hours.
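Here is a minimal sketch of that scheduling idea: given a day of hourly electricity prices, place the flexible portion of the computing load in the cheapest hours. The prices and the six-hour deferrable job are made up for illustration.

```python
# Demand-response sketch: run deferrable computing jobs in the cheapest hours.
# Hourly prices (cents per kWh) and the six-hour job are made up for illustration.
hourly_price = [22, 20, 18, 15, 14, 13, 15, 19, 25, 30, 32, 31,
                29, 28, 27, 26, 27, 30, 33, 34, 31, 27, 24, 23]
deferrable_hours = 6

cheapest_hours = sorted(range(24), key=lambda h: hourly_price[h])[:deferrable_hours]
print(sorted(cheapest_hours))  # the overnight and early-morning hours, for these prices
```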

Implementing broader and larger scale flexibility in power consumption requires innovation in hardware, software and grid-data center coordination. Especially for AI, there is much room to develop new strategies to tune data centers’ computational loads and therefore energy consumption. For example, data centers can scale back accuracy to reduce workloads when training AI models.

Realizing this vision requires better modeling and forecasting. Data centers can try to better understand and predict their loads and conditions. It’s also important to predict the grid load and growth.

The Electric Power Research Institute’s load forecasting initiative involves activities to help with grid planning and operations. Comprehensive monitoring and intelligent analytics – possibly relying on AI – for both data centers and the grid are essential for accurate forecasting.

On the edge

The U.S. is at a critical juncture with the explosive growth of AI. It is immensely difficult to integrate hundreds of megawatts of electricity demand into already strained grids. It might be time to rethink how the industry builds data centers.

One possibility is to sustainably build more edge data centers – smaller, widely distributed facilities – to bring computing to local communities. Edge data centers can also reliably add computing power to dense, urban regions without further stressing the grid. While these smaller centers currently make up 10% of data centers in the U.S., analysts project the market for smaller-scale edge data centers to grow by over 20% in the next five years.

Along with converting data centers into flexible and controllable loads, innovating in the edge data center space may make AI’s energy demands much more sustainable.

This article has been updated to correct an editing error about the date Three Mile Island’s Unit 1 nuclear reactor was shut down.

About the Author:

Ayse Coskun, Professor of Electrical and Computer Engineering, Boston University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Verifying facts in the age of AI – librarians offer 5 strategies

By Tracy Bicknell-Holmes, Boise State University; Elaine Watson, Boise State University, and Memo Cordova, Boise State University 

The phenomenal growth in artificial intelligence tools has made it easy to create a story quickly, complicating a reader’s ability to determine if a news source or article is truthful or reliable. For instance, earlier this year, people were sharing an article about the supposed suicide of Israeli Prime Minister Benjamin Netanyahu’s psychiatrist as if it were real. It ended up being an AI-generated rewrite of a satirical piece from 2010.

The problem is widespread. According to a 2021 Pearson Institute/AP-NORC poll, “Ninety-five percent of Americans believe the spread of misinformation is a problem.” The Pearson Institute researches methods to reduce global conflicts.

As library scientists, we combat the increase in misinformation by teaching a number of ways to validate the accuracy of an article. These methods include the SIFT Method (Stop, Investigate, Find, Trace), the P.R.O.V.E.N. Source Evaluation method (Purpose, Relevance, Objectivity, Verifiability, Expertise and Newness), and lateral reading.

Lateral reading is a strategy for investigating a source by opening a new browser tab to conduct a search and consult other sources. Lateral reading involves cross-checking the information by researching the source rather than scrolling down the page.

Here are five techniques based on these methods to help readers determine news facts from fiction:

1. Research the author or organization

Search for information beyond the entity’s own website. What are others saying about it? Are there any red flags that lead you to question its credibility? Search the entity’s name in quotation marks in your browser and look for sources that critically review the organization or group. An organization’s “About” page might tell you who is on their board, their mission and their nonprofit status, but this information is typically written to present the organization in a positive light.

The P.R.O.V.E.N. Source Evaluation method includes a section called “Expertise,” which recommends that readers check the author’s credentials and affiliations. Do the authors have advanced degrees or expertise related to the topic? What else have they written? Who funds the organization and what are their affiliations? Do any of these affiliations reveal a potential conflict of interest? Might their writings be biased in favor of one particular viewpoint?

If any of this information is missing or questionable, you may want to stay away from this author or organization.

2. Use good search techniques

Become familiar with search techniques available in your favorite web browser, such as searching keywords rather than full sentences and limiting searches by domain names, such as .org, .gov, or .edu.

Another good technique is putting two or more words in quotation marks so the search engine finds the words next to each other in that order, such as “Pizzagate conspiracy.” This leads to more relevant results.

In an article published in Nature, a team of researchers wrote that “77% of search queries that used the headline or URL of a false/misleading article as a search query return at least one unreliable news link among the top ten results.”

A more effective search would be to identify the key concepts in the headline in question and search those individual words as keywords. For example, if the headline is “Video Showing Alien at Miami Mall Sparks Claims of Invasion,” readers could search: “Alien invasion” Miami mall.

3. Verify the source

Verify the original sources of the information. Was the information cited, paraphrased or quoted accurately? Can you find the same facts or statements in the original source? Purdue Global, Purdue University’s online university for working adults, recommends verifying citations and references that can also apply to news stories by checking that the sources are “easy to find, easy to access, and not outdated.” It also recommends checking the original studies or data cited for accuracy.

The SIFT Method echoes this in its recommendation to “trace claims, quotes, and media to the original context.” You cannot assume that re-reporting is always accurate.

4. Use fact-checking websites

Search fact-checking websites such as InfluenceWatch.org, Poynter.org, Politifact.com or Snopes.com to verify claims. What conclusions did the fact-checkers reach about the accuracy of the claims?

A Harvard Kennedy School Misinformation Review article found that the “high level of agreement” between fact-checking sites “enhances the credibility of fact checkers in the eyes of the public.”

5. Pause and reflect

Pause and reflect to see if what you have read has triggered a strong emotional response. An article in the journal Cognitive Research indicates that news items that cause strong emotions increase our tendency “to believe fake news stories.”

One online study found that the simple act of “pausing to think” and reflect on whether a headline is true or false may prevent a person from sharing false information. While the study indicated that pausing only decreases intentions to share by a small amount – 0.32 points on a 6-point scale – the authors argue that this could nonetheless cut down on the spread of fake news on social media.

Knowing how to identify and check for misinformation is an important part of being a responsible digital citizen. This skill is all the more important as AI becomes more prevalent.

About the Authors:

Tracy Bicknell-Holmes, Library professor, Boise State University; Elaine Watson, Librarian and Associate Professor, Boise State University, and Memo Cordova, Library associate professor, Boise State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

From diagnosing brain disorders to cognitive enhancement, 100 years of EEG have transformed neuroscience

By Erika Nyhus, Bowdoin College 

Electroencephalography, or EEG, was invented 100 years ago. In the years since the invention of this device to monitor brain electricity, it has had an incredible impact on how scientists study the human brain.

Since its first use, the EEG has shaped researchers’ understanding of cognition, from perception to memory. It has also been important for diagnosing and guiding treatment of multiple brain disorders, including epilepsy.

I am a cognitive neuroscientist who uses EEG to study how people remember events from their past. The EEG’s 100-year anniversary is an opportunity to reflect on this discovery’s significance in neuroscience and medicine.

Discovery of EEG

On July 6, 1924, psychiatrist Hans Berger performed the first EEG recording on a human, a 17-year-old boy undergoing neurosurgery. At the time, Berger and other researchers were performing electrical recordings on the brains of animals.

What set Berger apart was his obsession with finding the physical basis of what he called psychic energy, or mental effort, in people. Through a series of experiments spanning his early career, Berger measured brain volume and temperature to study changes in mental processes such as intellectual work, attention and desire.

He then turned to recording electrical activity. Though he recorded the first traces of EEG in the human brain in 1924, he did not publish the results until 1929. Those five intervening years were a tortuous phase of self-doubt about the source of the EEG signal in the brain and refining the experimental setup. Berger recorded hundreds of EEGs on multiple subjects, including his own children, with both experimental successes and setbacks.

This is among the first EEG readings published in Hans Berger’s study. The top trace is the EEG, while the bottom is a 10 Hz reference trace.
Hans Berger/Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie. 1929; 87:527-70 via Wikimedia Commons

Finally convinced of his results, he published a series of papers in the journal Archiv für Psychiatrie and had hopes of winning a Nobel Prize. Unfortunately, the research community doubted his results, and years passed before anyone else started using EEG in their own research.

Berger was eventually nominated for a Nobel Prize in 1940. But Nobels were not awarded that year in any category due to World War II and Germany’s occupation of Norway.

Neural oscillations

When many neurons are active at the same time, they produce an electrical signal strong enough to spread instantaneously through the conductive tissue of the brain, skull and scalp. EEG electrodes placed on the head can record these electrical signals.

Since the discovery of EEG, researchers have shown that neural activity oscillates at specific frequencies. In his initial EEG recordings in 1924, Berger noted the predominance of oscillatory activity that cycled eight to 12 times per second, or 8 to 12 hertz, named alpha oscillations. Since the discovery of alpha rhythms, there have been many attempts to understand how and why neurons oscillate.
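To get a feel for what “oscillating at 8 to 12 hertz” means in a recording, the sketch below builds a simulated signal with a 10 hertz rhythm buried in noise and finds that rhythm as a peak in the power spectrum – the same basic analysis used to spot alpha activity in real EEG. The signal is synthetic, not patient data.

```python
# Find the dominant rhythm in a simulated signal, the way an 8-12 Hz alpha
# rhythm shows up as a peak in an EEG power spectrum. Synthetic data only.
import numpy as np
from scipy import signal

fs = 250                              # sampling rate, in samples per second
t = np.arange(0, 10, 1 / fs)          # 10 seconds of data
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

freqs, power = signal.welch(eeg, fs=fs, nperseg=2 * fs)
print(freqs[np.argmax(power)])        # about 10 Hz, inside the alpha band
```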

Neural oscillations are thought to be important for effective communication between specialized brain regions. For example, theta oscillations that cycle at 4 to 8 hertz are important for communication between brain regions involved in memory encoding and retrieval in animals and humans.

Researchers then examined whether they could alter neural oscillations and therefore affect how neurons talk to each other. Studies have shown that many behavioral and noninvasive methods can alter neural oscillations and lead to changes in cognitive performance. Engaging in specific mental activities can induce neural oscillations in the frequencies those mental activities use. For example, my team’s research found that mindfulness meditation can increase theta frequency oscillations and improve memory retrieval.

Noninvasive brain stimulation methods can target frequencies of interest. For example, my team’s ongoing research found that brain stimulation at theta frequency can lead to improved memory retrieval.

EEG has also led to major discoveries about how the brain processes information in many other cognitive domains, including how people perceive the world around them, how they focus their attention, how they communicate through language and how they process emotions.

Diagnosing and treating brain disorders

EEG is commonly used today to diagnose sleep disorders and epilepsy and to guide brain disorder treatments.

Scientists are using EEG to see whether memory can be improved with noninvasive brain stimulation. Although the research is still in its infancy, there have been some promising results. For example, one study found that noninvasive brain stimulation at gamma frequency – 25 hertz – improved memory and neurotransmitter transmission in Alzheimer’s disease.

A new type of noninvasive brain stimulation called temporal interference uses two high frequencies to cause neural activity equal to the difference between the stimulation frequencies. The high frequencies can better penetrate the brain and reach the targeted area. Researchers recently tested this method in people, using 2,000 hertz and 2,005 hertz to deliver a 5 hertz theta-frequency signal to a key brain region for memory, the hippocampus. This led to improvements in remembering the name associated with a face.
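The arithmetic behind temporal interference can be illustrated with a few lines of signal math: adding a 2,000 hertz and a 2,005 hertz wave produces a combined field whose slow envelope fluctuates at the 5 hertz difference. This is a generic demonstration of the beat effect, not a model of the stimulation hardware.

```python
# Beat-frequency illustration of temporal interference: two high-frequency
# waves combine into a field whose envelope fluctuates at their 5 Hz difference.
import numpy as np
from scipy.signal import hilbert

fs = 20_000                                   # samples per second
t = np.arange(0, 2, 1 / fs)                   # two seconds of signal
combined = np.sin(2 * np.pi * 2000 * t) + np.sin(2 * np.pi * 2005 * t)

envelope = np.abs(hilbert(combined))          # slow modulation of the combined field
envelope -= envelope.mean()                   # drop the constant offset before the FFT

spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
print(freqs[np.argmax(spectrum)])             # about 5 Hz, the difference frequency
```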

Although these results are promising, more research is needed to understand the exact role neural oscillations play in cognition and whether altering them can lead to long-lasting cognitive enhancement.

The future of EEG

The 100-year anniversary of the EEG provides an opportunity to consider what it has taught us about brain function and what this technique can do in the future.

In a survey commissioned by the journal Nature Human Behaviour, over 500 researchers who use EEG in their work were asked to make predictions on the future of the technique. What will be possible in the next 100 years of EEG?

Some researchers, including myself, predict that we’ll use EEG to diagnose and create targeted treatments for brain disorders. Others anticipate that an affordable, wearable EEG will be widely used to enhance cognitive function at home or will be seamlessly integrated into virtual reality applications. The possibilities are vast.

About the Author:

Erika Nyhus, Associate Professor of Psychology and Neuroscience, Bowdoin College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Quantum computers are like kaleidoscopes – why unusual metaphors help illustrate science and technology

By Sorin Adam Matei, Purdue University 

Quantum computing is like Forrest Gump’s box of chocolates: You never know what you’re gonna get. Quantum phenomena – the behavior of matter and energy at the atomic and subatomic levels – are not definite, one thing or another. They are opaque clouds of possibility or, more precisely, probabilities. When someone observes a quantum system, it loses its quantum-ness and “collapses” into a definite state.

Quantum phenomena are mysterious and often counterintuitive. This makes quantum computing difficult to understand. People naturally reach for the familiar to attempt to explain the unfamiliar, and for quantum computing this usually means using traditional binary computing as a metaphor. But explaining quantum computing this way leads to major conceptual confusion, because at a base level the two are entirely different animals.

This problem highlights the often mistaken belief that common metaphors are more useful than exotic ones when explaining new technologies. Sometimes the opposite approach is more useful. The freshness of the metaphor should match the novelty of the discovery.

The uniqueness of quantum computers calls for an unusual metaphor. As a communications researcher who studies technology, I believe that quantum computers can be better understood as kaleidoscopes.

This image could give you a better grasp of how quantum computers work.
Crystal A Murray/Flickr, CC BY-NC-SA

Digital certainty vs. quantum probabilities

The gap between understanding classical and quantum computers is a wide chasm. Classical computers store and process information via transistors, which are electronic devices that take binary, deterministic states: one or zero, yes or no. Quantum computers, in contrast, handle information probabilistically at the atomic and subatomic levels.

Classical computers use the flow of electricity to sequentially open and close gates to record or manipulate information. Information flows through circuits, triggering actions through a series of switches that record information as ones and zeros. Using binary math, bits are the foundation of all things digital, from the apps on your phone to the account records at your bank and the Wi-Fi signals bouncing around your home.

In contrast, quantum computers use changes in the quantum states of atoms, ions, electrons or photons. Quantum computers link, or entangle, multiple quantum particles so that changes to one affect all the others. They then introduce interference patterns, like multiple stones tossed into a pond at the same time. Some waves combine to create higher peaks, while some waves and troughs combine to cancel each other out. Carefully calibrated interference patterns guide the quantum computer toward the solution of a problem.
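A tiny numerical example of that cancellation, using only standard textbook quantum math rather than any particular machine: send a qubit through two “splitting” operations, and the two routes to one outcome reinforce while the two routes to the other cancel.

```python
# Interference in miniature: amplitudes from different computational "paths"
# add up or cancel before probabilities are taken. Textbook math, no hardware.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # splits a state into two equal waves
zero = np.array([1.0, 0.0])                   # start in the definite state "0"

after_two_splits = H @ (H @ zero)             # split, then recombine the two paths
print(np.abs(after_two_splits) ** 2)          # approximately [1, 0]: paths to "0" reinforce, paths to "1" cancel
```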

Physicist Katie Mack explains quantum probability.

Achieving a quantum leap, conceptually

The term “bit” is a metaphor. The word suggests that during calculations, a computer can break up large values into tiny ones – bits of information – which electronic devices such as transistors can more easily process.

Using metaphors like this has a cost, though. They are not perfect. Metaphors are incomplete comparisons that transfer knowledge from something people know well to something they are working to understand. The bit metaphor ignores that the binary method does not deal with many types of different bits at once, as common sense might suggest. Instead, all bits are the same.

The smallest unit of a quantum computer is called the quantum bit, or qubit. But transferring the bit metaphor to quantum computing is even less adequate than using it for classical computing. Transferring a metaphor from one use to another blunts its effect.

The prevalent explanation of quantum computing is that while classical computers can store or process only a zero or one in a transistor or other computational unit, quantum computers supposedly store and handle both zero and one and other values in between at the same time through the process of superposition.

Superposition, however, does not store one or zero or any other number simultaneously. There is only an expectation that the values might be zero or one at the end of the computation. This quantum probability is the polar opposite of the binary method of storing information.

Driven by quantum science’s uncertainty principle, the probability that a qubit stores a one or zero is like Schroedinger’s cat, which can be either dead or alive, depending on when you observe it. But the two different values do not exist simultaneously during superposition. They exist only as probabilities, and an observer cannot determine when or how frequently those values existed before the observation ended the superposition.

Leaving behind these challenges to using traditional binary computing metaphors means embracing new metaphors to explain quantum computing.

Peering into kaleidoscopes

The kaleidoscope metaphor is particularly apt to explain quantum processes. Kaleidoscopes can create infinitely diverse yet orderly patterns using a limited number of colored glass beads, mirror-dividing walls and light. Rotating the kaleidoscope enhances the effect, generating an infinitely variable spectacle of fleeting colors and shapes.

The shapes not only change but can’t be reversed. If you turn the kaleidoscope in the opposite direction, the imagery will generally remain the same, but the exact composition of each shape or even their structures will vary as the beads randomly mingle with each other. In other words, while the beads, light and mirrors could replicate some patterns shown before, these are never absolutely the same.

If you don’t have a kaleidoscope handy, this video is a good substitute.

Using the kaleidoscope metaphor, the solution a quantum computer provides – the final pattern – depends on when you stop the computing process. Quantum computing isn’t about guessing the state of any given particle but using mathematical models of how the interaction among many particles in various states creates patterns, called quantum correlations.

Each final pattern is the answer to a problem posed to the quantum computer, and what you get in a quantum computing operation is a probability that a certain configuration will result.

New metaphors for new worlds

Metaphors make the unknown manageable, approachable and discoverable. Approximating the meaning of a surprising object or phenomenon by extending an existing metaphor is a method that is as old as calling the edge of an ax its “bit” and its flat end its “butt.” The two metaphors take something we understand from everyday life very well, applying it to a technology that needs a specialized explanation of what it does. Calling the cutting edge of an ax a “bit” suggestively indicates what it does, adding the nuance that it changes the object it is applied to. When an ax shapes or splits a piece of wood, it takes a “bite” from it.

Metaphors, however, do much more than provide convenient labels and explanations of new processes. The words people use to describe new concepts change over time, expanding and taking on a life of their own.

When encountering dramatically different ideas, technologies or scientific phenomena, it’s important to use fresh and striking terms as windows to open the mind and increase understanding. Scientists and engineers seeking to explain new concepts would do well to seek out originality and master metaphors – in other words, to think about words the way poets do.

About the Author:

Sorin Adam Matei, Associate Dean for Research, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New database features 250 AI tools that can enhance social science research

By Megan Stubbs-Richardson, Mississippi State University; Devon Brenner, Mississippi State University; Lauren Etheredge, Mississippi State University, and MacKenzie Paul, Baylor University 

AI – or artificial intelligence – is often used as a way to summarize data and improve writing. But AI tools also represent a powerful and efficient way to analyze large amounts of text to search for patterns. In addition, AI tools can assist with developing research products that can be shared widely.

It’s with that in mind that we, as researchers in social science, developed a new database of AI tools for the field. In the database, we compiled information about each tool and documented whether it was useful for literature reviews, data collection and analyses, or research dissemination. We also provided information on the costs, logins and plug-in extensions available for each tool.

When asked about their perceptions of AI, many social scientists express caution or apprehension. In a sample of faculty and students from over 600 institutions, only 22% of university faculty reported that they regularly used AI tools.

From combing through lengthy transcripts or text-based data to writing literature reviews and sharing results, we believe AI can help social science researchers – such as those in psychology, sociology and communication – as well as others get the most out of their data and present it to a wider audience.

Analyze text using AI

Qualitative research often involves poring over transcripts or written language to identify themes and patterns. While this kind of research is powerful, it is also labor-intensive. The power of AI platforms to sift through large datasets not only saves researchers time, but it can also help them analyze data that couldn’t have been analyzed previously because of the size of the dataset.

Specifically, AI can assist social scientists by identifying potential themes or common topics in large, text-based data that scientists can interrogate using qualitative research methods. For example, AI can analyze 15 million social media posts to identify themes in how people coped with COVID-19. These themes can then give researchers insight into larger trends in the data, allowing us to refine criteria for a more in-depth, qualitative analysis.
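Below is a scaled-down sketch of that kind of automated theme detection, using an off-the-shelf topic model from scikit-learn on a handful of made-up posts. The posts, the two-topic setting and the model choice are illustrative assumptions, not the actual study’s method.

```python
# Scaled-down sketch of automated theme detection in text.
# The toy posts and the two-topic setting are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "stuck at home baking bread and doing puzzles",
    "missing friends so we set up weekly video calls",
    "daily walks and home workouts keep me sane",
    "video calls with family every sunday evening",
    "trying new bread recipes while in lockdown",
    "exercise outside once a day then call friends online",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [words[j] for j in topic.argsort()[-4:]]
    print(f"theme {i}:", ", ".join(top_words))
```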

AI tools can also be used to adapt language and scientists’ word choice in research designs. In particular, AI can reduce bias by improving the wording of questions in surveys or refining keywords used in social media data collection.

Identify gaps in knowledge

Another key task in research is to scan the field for previous work to identify gaps in knowledge. AI applications are built on systems that can synthesize text. This makes literature reviews – the section of a research paper that summarizes other research on the same topic – and writing processes more efficient.

Research shows that human feedback to AI, such as providing examples of simple logic, can significantly improve the tools’ ability to perform complex reasoning. With this in mind, we can continually revise our instructions to AI and refine its ability to pull relevant literature.

However, social scientists must be wary of fake sources – a big concern with generative AI. It is essential to verify any sources AI tools provide to ensure they come from peer-reviewed journals.

Share research findings

AI tools can quickly summarize research findings in a reader-friendly way by assisting with writing blogs, creating infographics and producing presentation slides and even images.

Our database contains AI tools that can also help scientists present their findings on social media. One tool worth highlighting is BlogTweet. This free AI tool allows users to copy and paste text from an article like this one to generate tweet threads and start conversations.

Be aware of the cost of AI tools

Two-thirds of the tools in the database cost money. While our primary objective was to identify the most useful tools for social scientists, we also sought to identify open-source tools and curated a list of 85 free tools that can support literature reviews, writing, data collection, analysis and visualization efforts.

12 best free AI tools for academic research and researchers.

In our analysis of the cost of AI tools, we also found that many offer “freemium” access to tools. This means you can explore a free version of the product. More advanced versions of the tool are available through the purchase of tokens or subscription plans.

For some tools, costs can be somewhat hidden or unexpected. For instance, a tool that seems open source on the surface may actually have rate limits, and users may find that they’ve run out of free questions to ask the AI.

The future of the database

Since the release of the Artificial Intelligence Applications for Social Science Research Database on Oct. 5, 2023, it has been downloaded over 400 times across 49 countries. In the database, we found 131 AI tools useful for literature reviews, summaries or writing. As many as 146 AI tools are useful for data collection or analysis, and 108 are useful for research dissemination.

We continue to update the database and hope that it can aid academic communities in their exploration of AI and generate new conversations. The more that social scientists use the database, the more they can work toward consensus on adopting ethical approaches to using AI in research and analysis.

About the Authors:

Megan Stubbs-Richardson, Assistant Research Professor at the Social Science Research Center, Mississippi State University; Devon Brenner, Professor of education, Mississippi State University; Lauren Etheredge, Research associate in sociology, Mississippi State University, and MacKenzie Paul, Doctoral student in psychology, Baylor University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI plus gene editing promises to shift biotech into high gear

By Marc Zimmer, Connecticut College 

During her chemistry Nobel Prize lecture in 2018, Frances Arnold said, “Today we can for all practical purposes read, write and edit any sequence of DNA, but we cannot compose it.” That isn’t true anymore.

Since then, science and technology have progressed so much that artificial intelligence has learned to compose DNA, and with genetically modified bacteria, scientists are on their way to designing and making bespoke proteins.

The goal is that with AI’s designing talents and gene editing’s engineering abilities, scientists can modify bacteria to act as mini factories producing new proteins that can reduce greenhouse gases, digest plastics or act as species-specific pesticides.

As a chemistry professor and computational chemist who studies molecular science and environmental chemistry, I believe that advances in AI and gene editing make this a realistic possibility.

Gene sequencing – reading life’s recipes

All living things contain genetic materials – DNA and RNA – that provide the hereditary information needed to replicate themselves and make proteins. Proteins constitute 75% of human dry weight. They make up muscles, enzymes, hormones, blood, hair and cartilage. Understanding proteins means understanding much of biology. The order of nucleotide bases in DNA, or RNA in some viruses, encodes this information, and genomic sequencing technologies identify the order of these bases.
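
As a rough illustration of how a nucleotide sequence encodes a protein, here is a minimal TypeScript sketch that translates a short DNA string into amino acids using a toy codon table. Only a handful of the 64 real codons are included, and the input sequence is made up, so treat this as a mental model rather than a bioinformatics tool.

```typescript
// Toy translator: maps DNA codons (3 bases) to one-letter amino acid codes.
// Only a small subset of the 64-codon genetic code is included here.
const CODON_TABLE: Record<string, string> = {
  ATG: "M", // methionine, the usual start codon
  TTT: "F", TTC: "F", // phenylalanine
  GGA: "G", GGC: "G", // glycine
  GCT: "A", GCC: "A", // alanine
  TAA: "*", TAG: "*", TGA: "*", // stop codons
};

function translate(dna: string): string {
  let protein = "";
  for (let i = 0; i + 3 <= dna.length; i += 3) {
    const aminoAcid = CODON_TABLE[dna.slice(i, i + 3)] ?? "?";
    if (aminoAcid === "*") break; // a stop codon ends the protein
    protein += aminoAcid;
  }
  return protein;
}

// "ATG TTC GGA GCC TAA" translates to "MFGA".
console.log(translate("ATGTTCGGAGCCTAA"));
```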

The Human Genome Project was an international effort that sequenced the entire human genome from 1990 to 2003. Sequencing the first 1% of the genome took seven years; thanks to rapidly improving technologies, the remaining 99% took only another seven. By 2003, scientists had the complete sequence of the 3 billion nucleotide base pairs coding for 20,000 to 25,000 genes in the human genome.

However, understanding the functions of most proteins and correcting their malfunctions remained a challenge.

AI learns proteins

Each protein’s shape is critical to its function and is determined by the sequence of its amino acids, which is in turn determined by the gene’s nucleotide sequence. Misfolded proteins have the wrong shape and can cause illnesses such as neurodegenerative diseases, cystic fibrosis and Type 2 diabetes. Understanding these diseases and developing treatments requires knowledge of protein shapes.

Before 2016, the only way to determine the shape of a protein was through X-ray crystallography, a laboratory technique that uses the diffraction of X-rays by single crystals to determine the precise three-dimensional arrangement of atoms within a molecule. At that time, the structures of about 200,000 proteins had been determined by crystallography, at a cost of billions of dollars.

AlphaFold, a machine learning program, used these crystal structures as a training set and learned to predict protein shapes from their amino acid sequences. And in less than a year, the program calculated the structures of the proteins encoded by all 214 million genes that have been sequenced and published. The protein structures AlphaFold determined have all been released in a freely available database.
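
As a rough sketch of what “freely available” means in practice, the snippet below queries the EBI-hosted AlphaFold Protein Structure Database for a single protein. The endpoint path and response fields (`/api/prediction/{uniprotId}`, `pdbUrl`, `modelCreatedDate`) reflect the database’s public API as commonly documented, but treat them as assumptions and check the current API documentation; the UniProt ID used here, P69905 (human hemoglobin subunit alpha), is just an example.

```typescript
// Hedged sketch: look up one predicted structure in the AlphaFold database.
// The endpoint and field names are assumptions based on the public API docs;
// verify them before relying on this in real code.
async function fetchAlphaFoldPrediction(uniprotId: string): Promise<void> {
  const url = `https://alphafold.ebi.ac.uk/api/prediction/${uniprotId}`;
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`AlphaFold DB request failed with status ${response.status}`);
  }
  // The service returns a JSON array of prediction entries for the UniProt ID.
  const entries: Array<{ pdbUrl?: string; modelCreatedDate?: string }> =
    await response.json();
  for (const entry of entries) {
    console.log(entry.modelCreatedDate, entry.pdbUrl); // link to the predicted model file
  }
}

// Example: P69905 is human hemoglobin subunit alpha.
fetchAlphaFoldPrediction("P69905").catch(console.error);
```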

To effectively address noninfectious diseases and design new drugs, scientists need more detailed knowledge of how proteins, especially enzymes, bind small molecules. Enzymes are protein catalysts that enable and regulate biochemical reactions.

AlphaFold3, released May 8, 2024, can predict protein shapes and the locations where small molecules can bind to these proteins. In rational drug design, drugs are designed to bind proteins involved in a pathway related to the disease being treated. The small-molecule drugs bind to the protein’s binding site and modulate its activity, thereby influencing the disease pathway. By predicting protein binding sites, AlphaFold3 will enhance researchers’ drug development capabilities.

AI + CRISPR = composing new proteins

Around 2015, the development of CRISPR technology revolutionized gene editing. CRISPR can be used to find a specific part of a gene, change or delete it, make the cell express more or less of its gene product, or even add an utterly foreign gene in its place.
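
Conceptually, Cas9 “finds a specific part of a gene” by scanning DNA for a stretch of roughly 20 bases that matches its guide RNA and sits immediately next to a short “NGG” motif called a PAM. The TypeScript sketch below is a toy version of that search on a plain string; it ignores the reverse strand, mismatches and everything else real genome editing involves, and the guide and DNA sequences are made up.

```typescript
// Toy model of CRISPR-Cas9 target search: find positions where a 20-base
// guide sequence is immediately followed by an "NGG" PAM (N = any base).
// This ignores the reverse strand, mismatches and the rest of the biology --
// it is only a mental model of the targeting step.
function findCas9Sites(genome: string, guide: string): number[] {
  const sites: number[] = [];
  for (let i = 0; i + guide.length + 3 <= genome.length; i++) {
    const candidate = genome.slice(i, i + guide.length);
    const pam = genome.slice(i + guide.length, i + guide.length + 3);
    if (candidate === guide && pam[1] === "G" && pam[2] === "G") {
      sites.push(i); // Cas9 would cut about 3 bases upstream of the PAM
    }
  }
  return sites;
}

// Hypothetical 20-base guide and a short made-up DNA string.
const guide = "GACGTTACCGGATTCATGCA";
const dna = "TTT" + guide + "TGG" + "AAACCC";
console.log(findCas9Sites(dna, guide)); // -> [3]
```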

In 2020, Jennifer Doudna and Emmanuelle Charpentier received the Nobel Prize in chemistry “for the development of a method (CRISPR) for genome editing.” With CRISPR, gene editing, which once took years and was species specific, costly and laborious, can now be done in days and for a fraction of the cost.

AI and genetic engineering are advancing rapidly. What was once complicated and expensive is now routine. Looking ahead, the dream is of bespoke proteins designed and produced by a combination of machine learning and CRISPR-modified bacteria. AI would design the proteins, and bacteria altered using CRISPR would produce the proteins. Enzymes produced this way could potentially breathe in carbon dioxide and methane while exhaling organic feedstocks, or break down plastics into substitutes for concrete.

I believe that these ambitions are not unrealistic, given that genetically modified organisms already account for 2% of the U.S. economy in agriculture and pharmaceuticals.

Two groups have made functioning enzymes from scratch that were designed by differing AI systems. David Baker’s Institute for Protein Design at the University of Washington devised a new deep-learning-based protein design strategy it named “family-wide hallucination,” which it used to make a unique light-emitting enzyme. Meanwhile, the biotech startup Profluent has used an AI model trained on the sum of all CRISPR-Cas knowledge to design new functioning genome editors.

If AI can learn to make new CRISPR systems as well as bioluminescent enzymes that work and have never been seen on Earth, there is hope that pairing CRISPR with AI can be used to design other new bespoke enzymes. Although the CRISPR-AI combination is still in its infancy, once it matures it is likely to be highly beneficial and could even help the world tackle climate change.

It’s important to remember, however, that the more powerful a technology is, the greater the risks it poses. Also, humans have not been very successful at engineering nature due to the complexity and interconnectedness of natural systems, which often leads to unintended consequences.

About the Author:

Marc Zimmer, Professor of Chemistry, Connecticut College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Cybersecurity researchers spotlight a new ransomware threat – be careful where you upload files

By Selcuk Uluagac, Florida International University 

You probably know better than to click on links that download unknown files onto your computer. It turns out that uploading files can get you into trouble, too.

Today’s web browsers are much more powerful than earlier generations of browsers. They’re able to manipulate data within both the browser and the computer’s local file system. Users can send and receive email, listen to music or watch a movie within a browser with the click of a button.

Unfortunately, these capabilities also mean that hackers can find clever ways to abuse the browsers to trick you into letting ransomware lock up your files when you think that you’re simply doing your usual tasks online.

I’m a computer scientist who studies cybersecurity. My colleagues and I have shown how hackers can gain access to your computer’s files via the File System Access Application Programming Interface (API), which enables web applications in modern browsers to interact with the users’ local file systems.

The threat applies to Google’s Chrome and Microsoft’s Edge browsers but not Apple’s Safari or Mozilla’s Firefox. Chrome accounts for 65% of browsers used, and Edge accounts for 5%. To the best of my knowledge, there have been no reports of hackers using this method so far.

My colleagues, who include a Google security researcher, and I have communicated with the developers responsible for the File System Access API, and they have expressed support for our work and interest in our approaches to defending against this kind of attack. We also filed a security report with Microsoft but have not heard back.

Double-edged sword

Today’s browsers are almost operating systems unto themselves. They can run software programs and encrypt files. These capabilities, combined with the browser’s access to the host computer’s files – including ones in the cloud, shared folders and external drives – via the File System Access API, create a new opportunity for ransomware.

Imagine you want to edit photos on a benign-looking free online photo editing tool. When you upload the photos for editing, any hackers who control the malicious editing tool can access the files on your computer via your browser. The hackers would gain access to the folder you are uploading from and all subfolders. Then the hackers could encrypt the files in your file system and demand a ransom payment to decrypt them.
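
To make the scope of that access concrete, here is a hedged TypeScript sketch of what a web page can do once a user grants it a folder through the File System Access API: it walks the folder and every subfolder and counts the files it can reach. The picker and handle methods (`showDirectoryPicker`, `entries`, `createWritable`) are part of the API shipped in Chromium-based browsers; the sketch is a benign illustration of reach, not the attack itself, and the handles are typed loosely because the API is not part of the default DOM typings.

```typescript
// Hedged sketch of the File System Access API's reach (Chromium-based browsers).
// After a single user gesture grants access to one folder, the page can walk
// every file and subfolder beneath it. A malicious page could additionally
// request "readwrite" permission on the same handles and overwrite their contents.
async function walkGrantedFolder(): Promise<void> {
  // Feature-detect: Safari and Firefox do not ship showDirectoryPicker.
  if (!("showDirectoryPicker" in window)) {
    console.log("File System Access API is not available in this browser.");
    return;
  }
  // Opens the browser's folder picker; the user chooses which folder to expose.
  const root = await (window as any).showDirectoryPicker();
  let fileCount = 0;

  async function walk(dir: any, path: string): Promise<void> {
    for await (const [name, handle] of dir.entries()) {
      if (handle.kind === "directory") {
        await walk(handle, `${path}/${name}`); // subfolders are reachable too
      } else {
        fileCount++;
        // handle.requestPermission({ mode: "readwrite" }) followed by
        // handle.createWritable() would let the page rewrite this file --
        // the abuse scenario described above.
      }
    }
  }

  await walk(root, root.name);
  console.log(`This page can read ${fileCount} files under "${root.name}".`);
}
```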

Ransomware is a growing problem. Attacks have hit individuals as well as organizations, including Fortune 500 companies, banks, cloud service providers, cruise operators, threat-monitoring services, chip manufacturers, governments, medical centers and hospitals, insurance companies, schools, universities and even police departments. In 2023, organizations paid more than US$1.1 billion in ransomware payments to attackers, and 19 ransomware attacks targeted organizations every second.

It is no wonder ransomware is the No. 1 arms race today between hackers and security specialists. Traditional ransomware runs on your computer after hackers have tricked you into downloading it.

New defenses for a new threat

A team of researchers I lead at the Cyber-Physical Systems Security Lab at Florida International University, including postdoctoral researcher Abbas Acar and Ph.D. candidate Harun Oz, in collaboration with Google Senior Research Scientist Güliz Seray Tuncay, has been investigating this new type of potential ransomware for the past two years. Specifically, we have been exploring how powerful modern web browsers have become and how hackers can weaponize them to create novel forms of ransomware.

In our paper, RøB: Ransomware over Modern Web Browsers, which was presented at the USENIX Security Symposium in August 2023, we showed how this emerging ransomware strain is easy to design and how damaging it can be. In particular, we designed and implemented the first browser-based ransomware called RøB and analyzed its use with browsers running on three different major operating systems – Windows, Linux and MacOS – five cloud providers and five antivirus products.

Our evaluations showed that RøB is capable of encrypting numerous types of files. Because RøB runs within the browser, there are no malicious payloads for a traditional antivirus program to catch. This means existing ransomware detection systems struggle against this powerful browser-based ransomware.

We proposed three different defense approaches to mitigate this new ransomware type. These approaches operate at different levels – browser, file system and user – and complement one another.

The first approach temporarily halts a web application – a program that runs in the browser – in order to detect encrypted user files. The second approach monitors the activity of the web application on the user’s computer to identify ransomware-like patterns. The third approach introduces a new permission dialog box to inform users about the risks and implications associated with allowing web applications to access their computer’s file system.

When it comes to protecting your computer, be careful about where you upload as well as download files. Your uploads could be giving hackers an “in” to your computer.

About the Author:

Selcuk Uluagac, Professor of Computing and Information Science, Florida International University

This article is republished from The Conversation under a Creative Commons license. Read the original article.