Archive for Programming – Page 3

Building fairness into AI is crucial – and hard to get right

By Ferdinando Fioretto, University of Virginia 

Artificial intelligence’s capacity to process and analyze vast amounts of data has revolutionized decision-making processes, making operations in health care, finance, criminal justice and other sectors of society more efficient and, in many instances, more effective.

With this transformative power, however, comes a significant responsibility: the need to ensure that these technologies are developed and deployed in a manner that is equitable and just. In short, AI needs to be fair.

The pursuit of fairness in AI is not merely an ethical imperative but a requirement in order to foster trust, inclusivity and the responsible advancement of technology. However, ensuring that AI is fair is a major challenge. And on top of that, my research as a computer scientist who studies AI shows that attempts to ensure fairness in AI can have unintended consequences.

Why fairness in AI matters

Fairness in AI has emerged as a critical area of focus for researchers, developers and policymakers. It transcends technical achievement, touching on ethical, social and legal dimensions of the technology.

Ethically, fairness is a cornerstone of building trust and acceptance of AI systems. People need to trust that AI decisions that affect their lives – for example, hiring algorithms – are made equitably. Socially, AI systems that embody fairness can help address and mitigate historical biases – for example, those against women and minorities – fostering inclusivity. Legally, embedding fairness in AI systems helps bring those systems into alignment with anti-discrimination laws and regulations around the world.

Unfairness can stem from two primary sources: the input data and the algorithms. Research has shown that input data can perpetuate bias in various sectors of society. For example, in hiring, algorithms processing data that reflects societal prejudices or lacks diversity can perpetuate “like me” biases. These biases favor candidates who are similar to the decision-makers or those already in an organization. When biased data is then used to train a machine learning algorithm to aid a decision-maker, the algorithm can propagate and even amplify these biases.

Why fairness in AI is hard

Fairness is inherently subjective, influenced by cultural, social and personal perspectives. In the context of AI, researchers, developers and policymakers often translate fairness to the idea that algorithms should not perpetuate or exacerbate existing biases or inequalities.

However, measuring fairness and building it into AI systems is fraught with subjective decisions and technical difficulties. Researchers and policymakers have proposed various definitions of fairness, such as demographic parity, equality of opportunity and individual fairness.

Why the concept of algorithmic fairness is so challenging.

These definitions involve different mathematical formulations and underlying philosophies. They also often conflict, highlighting the difficulty of satisfying all fairness criteria simultaneously in practice.
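To make the contrast concrete, here is a minimal sketch in Python of how two of these definitions are typically measured. The hiring outcomes, group labels and numbers below are invented for illustration and are not drawn from any real system or from my research.

```python
# Toy illustration of two common fairness metrics; all data is hypothetical.

def selection_rate(decisions):
    """Fraction of candidates who received a positive decision (e.g., hired)."""
    return sum(decisions) / len(decisions)

def true_positive_rate(decisions, qualified):
    """Among qualified candidates, the fraction who received a positive decision."""
    qualified_decisions = [d for d, q in zip(decisions, qualified) if q == 1]
    return sum(qualified_decisions) / len(qualified_decisions)

# Hypothetical hiring outcomes (1 = hired, 0 = rejected) and ground-truth
# qualification labels for two demographic groups, A and B.
group_a = {"decisions": [1, 1, 0, 1, 0, 1], "qualified": [1, 1, 0, 1, 1, 1]}
group_b = {"decisions": [1, 0, 0, 0, 0, 1], "qualified": [1, 1, 0, 1, 0, 1]}

# Demographic parity compares overall selection rates across groups.
dp_gap = abs(selection_rate(group_a["decisions"])
             - selection_rate(group_b["decisions"]))

# Equality of opportunity compares true positive rates across groups.
eo_gap = abs(true_positive_rate(group_a["decisions"], group_a["qualified"])
             - true_positive_rate(group_b["decisions"], group_b["qualified"]))

print(f"Demographic parity gap:      {dp_gap:.2f}")
print(f"Equality of opportunity gap: {eo_gap:.2f}")
```

Even in a toy example like this, narrowing one gap – say, by hiring more candidates from group B regardless of qualification – can widen the other, which is exactly the kind of conflict between definitions described above.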

In addition, fairness cannot be distilled into a single metric or guideline. It encompasses a spectrum of considerations including, but not limited to, equality of opportunity, treatment and impact.

Unintended effects on fairness

The multifaceted nature of fairness means that AI systems must be scrutinized at every level of their development cycle, from the initial design and data collection phases to their final deployment and ongoing evaluation. This scrutiny reveals another layer of complexity. AI systems are seldom deployed in isolation. They are used as part of often complex and important decision-making processes, such as making recommendations about hiring or allocating funds and resources, and are subject to many constraints, including security and privacy.

Research my colleagues and I conducted shows that constraints such as computational resources, hardware types and privacy can significantly influence the fairness of AI systems. For instance, the need for computational efficiency can lead to simplifications that inadvertently overlook or misrepresent marginalized groups.

In our study on network pruning – a method to make complex machine learning models smaller and faster – we found that this process can unfairly affect certain groups. This happens because the pruning might not consider how different groups are represented in the data and by the model, leading to biased outcomes.
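To illustrate the kind of bookkeeping such a fairness audit involves – this is not our study’s models or data, just a sketch with invented stand-ins – the Python snippet below applies magnitude pruning to a toy linear classifier and reports accuracy separately for a hypothetical majority and minority group.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out roughly the fraction `sparsity` of weights with the smallest magnitude."""
    cutoff = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= cutoff, weights, 0.0)

def group_accuracy(weights, X, y):
    """Accuracy of the linear classifier sign(X @ w) on one group's data."""
    return float(np.mean((X @ weights > 0).astype(int) == y))

# Hypothetical trained weights and per-group evaluation data; in a real audit
# these would come from the actual model and held-out data for each group.
rng = np.random.default_rng(42)
weights = rng.normal(size=50)
groups = {
    "majority": (rng.normal(size=(800, 50)), rng.integers(0, 2, size=800)),
    "minority": (rng.normal(size=(80, 50)), rng.integers(0, 2, size=80)),
}

pruned = magnitude_prune(weights, sparsity=0.8)
for name, (X, y) in groups.items():
    before = group_accuracy(weights, X, y)
    after = group_accuracy(pruned, X, y)
    print(f"{name}: accuracy {before:.3f} -> {after:.3f} (change {after - before:+.3f})")

# A fairness audit compares these per-group changes: if the minority group
# loses noticeably more accuracy than the majority after compression, the
# pruning step itself has introduced a disparity.
```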

Similarly, privacy-preserving techniques, while crucial, can obscure the data necessary to identify and mitigate biases or disproportionally affect the outcomes for minorities. For example, when statistical agencies add noise to data to protect privacy, this can lead to unfair resource allocation because the added noise affects some groups more than others. This disproportionality can also skew decision-making processes that rely on this data, such as resource allocation for public services.
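A stripped-down sketch of that mechanism – with invented counts and an arbitrary noise scale, not any statistical agency’s real parameters – shows why the same amount of noise weighs far more heavily on small groups.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical census-style counts for a large and a small community.
true_counts = {"large city": 500_000, "small rural tract": 500}

# Privacy protection in the style of differential privacy: add Laplace noise of
# the same scale to every count, regardless of how large the count is.
noise_scale = 50.0  # illustrative only

for name, count in true_counts.items():
    noisy = count + rng.laplace(scale=noise_scale)
    relative_error = abs(noisy - count) / count
    print(f"{name}: true {count}, noisy {noisy:.0f}, relative error {relative_error:.2%}")

# The same absolute noise is a rounding error for the large city but can be a
# sizable fraction of the small tract's population, and any funding formula
# keyed to these counts inherits that distortion.
```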

These constraints do not operate in isolation but intersect in ways that compound their impact on fairness. For instance, when privacy measures exacerbate biases in data, it can further amplify existing inequalities. This makes it important to have a comprehensive understanding and approach to both privacy and fairness for AI development.

The path forward

Making AI fair is not straightforward, and there are no one-size-fits-all solutions. It requires a process of continuous learning, adaptation and collaboration. Given that bias is pervasive in society, I believe that people working in the AI field should recognize that it’s not possible to achieve perfect fairness and instead strive for continuous improvement.

This challenge requires a commitment to rigorous research, thoughtful policymaking and ethical practice. To make it work, researchers, developers and users of AI will need to ensure that considerations of fairness are woven into all aspects of the AI pipeline, from its conception through data collection and algorithm design to deployment and beyond.

About the Author:

Ferdinando Fioretto, Assistant Professor of Computer Science, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Bringing AI up to speed – autonomous auto racing promises safer driverless cars on the road

By Madhur Behl, University of Virginia 

The excitement of auto racing comes from split-second decisions and daring passes by fearless drivers. Imagine that scene, but without the driver – the car alone, guided by the invisible hand of artificial intelligence. Can the rush of racing unfold without a driver steering the course? It turns out that it can.

Enter autonomous racing, a field that’s not just about high-speed competition but also pushing the boundaries of what autonomous vehicles can achieve and improving their safety.

Over a century ago, at the dawn of automobiles, as society shifted from horse-drawn to motor-powered vehicles, there was public doubt about the safety and reliability of the new technology. Motorsport racing was organized to showcase the technological performance and safety of these horseless carriages. Similarly, autonomous racing is the modern arena to prove the reliability of autonomous vehicle technology as driverless cars begin to hit the streets.

Autonomous racing’s high-speed trials mirror the real-world challenges that autonomous vehicles face on streets: adjusting to unexpected changes and reacting in fractions of a second. Mastering these challenges on the track, where speeds are higher and reaction times shorter, leads to safer autonomous vehicles on the road.

Autonomous race cars pass, or ‘overtake,’ others on the Las Vegas Motor Speedway track.

I am a computer science professor who studies artificial intelligence, robotics and autonomous vehicles, and I lead the Cavalier Autonomous Racing team at the University of Virginia. The team competes in the Indy Autonomous Challenge, a global contest where universities pit fully autonomous Indy race cars against each other. Since its 2021 inception, the event has drawn top international teams to prestigious circuits like the Indianapolis Motor Speedway. The field, marked by both rivalry and teamwork, shows that collective problem-solving drives advances in autonomous vehicle safety.

At the Indy Autonomous Challenge passing competition held at the 2024 Consumer Electronics Show in Las Vegas in January 2024, our Cavalier team clinched second place and hit speeds of 143 mph (230 kilometers per hour) while autonomously overtaking another race car, affirming its status as a leading American team. TUM Autonomous Motorsport from the Technical University of Munich won the event.

An autonomous race car built by the Technical University of Munich prepares to pass the University of Virginia’s entrant.
Cavalier Autonomous Racing, University of Virginia, CC BY-ND

Pint-size beginnings

The field of autonomous racing didn’t begin with race cars on professional race tracks but with miniature cars at robotics conferences. In 2015, my colleagues and I engineered a 1/10 scale autonomous race car. We transformed a remote-controlled car into a small but powerful research and educational tool, which I named F1tenth, playing on the name of the traditional Formula One, or F1, race car. The F1tenth platform is now used by over 70 institutions worldwide to construct their miniaturized autonomous racers.

The F1tenth Autonomous Racing Grand Prix is now a marquee event at robotics conferences where teams from across the planet gather, each wielding vehicles that are identical in hardware and sensors, to engage in what is essentially an intense “battle of algorithms.” Victory on the track is claimed not by raw power but by the advanced AI algorithms’ control of the cars.

These race cars are small, but the challenges to autonomous driving are sizable.

F1tenth has also emerged as an engaging and accessible gateway for students to delve into robotics research. Over the years, I’ve reached thousands of students via my courses and online lecture series, which explain how to build, drive and autonomously race these vehicles.

Getting real

Today, the scope of our research has expanded significantly, advancing from small-scale models to actual autonomous Indy cars that compete at speeds upward of 150 mph (241 kph), executing complex overtaking maneuvers with other autonomous vehicles on the racetrack. The cars are built on a modified version of the Indy NXT chassis and are outfitted with sensors and controllers to allow autonomous driving. Indy NXT race cars are used in professional racing and are slightly smaller versions of the Indy cars made famous by the Indianapolis 500.

The Cavalier Autonomous Racing team stands behind their driverless race car.
Cavalier Autonomous Racing, University of Virginia, CC BY-ND

The gritty reality of racing these advanced machines on real racetracks pushes the boundaries of what autonomous vehicles can do. Autonomous racing takes the challenges of robotics and AI to new levels, requiring researchers to refine our understanding of how machines perceive their environment, make safe decisions and control complex maneuvers at a high speed where traditional methods begin to falter.

Precision is critical, and the margin for error in steering and acceleration is razor-thin, requiring a sophisticated grasp and exact mathematical description of the car’s movement, aerodynamics and drivetrain system. In addition, autonomous racing researchers create algorithms that use data from cameras, radar and lidar, which is like radar but with lasers instead of radio waves, to steer around competitors and safely navigate the high-speed and unpredictable racing environment.

My team has shared the world’s first open dataset for autonomous racing, inviting researchers everywhere to join in refining the algorithms that could help define the future of autonomous vehicles.

The data from the competitions is available for other researchers to use.

Crucible for autonomous vehicles

More than just a technological showcase, autonomous racing is a critical research frontier. When autonomous systems can reliably function in these extreme conditions, they inherently possess a buffer when operating in the ordinary conditions of street traffic.

Autonomous racing is a testbed where competition spurs innovation, collaboration fosters growth, and AI-controlled cars racing to the finish line chart a course toward safer autonomous vehicles.

About the Author:

Madhur Behl, Associate Professor of Robotics and Artificial Intelligence, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Why AI can’t replace air traffic controllers

By Amy Pritchett, Penn State 

After hours of routine operations, an air traffic controller gets a radio call from a small aircraft whose cockpit indicators can’t confirm that the plane’s landing gear is extended for landing. The controller arranges for the pilot to fly low by the tower so the controller can visually check the plane’s landing gear. All appears well. “It looks like your gear is down,” the controller tells the pilot.

The controller calls for the airport fire trucks to be ready just in case, and the aircraft circles back to land safely. Scenarios like this play out regularly. In the air traffic control system, everything must meet the highest levels of safety, but not everything goes according to plan.

Contrast this with the still science-fiction vision of future artificial intelligence “pilots” flying autonomous aircraft, complete with an autonomous air traffic control system handling aircraft as easily as routers shuttling data packets on the internet.

I’m an aerospace engineer who led a National Academies study ordered by Congress about air traffic controller staffing. Researchers are continually working on new technologies that automate elements of the air traffic control system, but technology can execute only those functions that are planned for during its design and so can’t modify standard procedures. As the scenario above illustrates, humans are likely to remain a necessary central component of air traffic control for a long time to come.

What air traffic controllers do

The Federal Aviation Administration’s fundamental guidance for the responsibility of air traffic controllers states: “The primary purpose of the air traffic control system is to prevent a collision involving aircraft.” Air traffic controllers are also charged with providing “a safe, orderly and expeditious flow of air traffic” and other services supporting safety, such as helping pilots avoid mountains and other hazardous terrain and hazardous weather, to the extent they can.

Air traffic controllers’ jobs vary. Tower controllers provide the local control that clears aircraft to take off and land, making sure that they are spaced safely apart. They also provide ground control, directing aircraft to taxi and notifying pilots of flight plans and potential safety concerns on that day before flight. Tower controllers are aided by some displays but mostly look outside from the towers and talk with pilots via radio. At larger airports staffed by FAA controllers, surface surveillance displays show controllers the aircraft and other vehicles on the ground on the airfield.

This FAA animation explains the three basic components of the U.S. air traffic control system.

Approach and en route controllers, on the other hand, sit in front of large displays in dark and quiet rooms. They communicate with pilots via radio. Their displays show aircraft locations on a map view with key features of the airspace boundaries and routes.

The 21 en route control centers in the U.S. manage traffic that is between and above airports and thus typically flying at higher speeds and altitudes.

Controllers at approach control facilities transition departing aircraft from local control after takeoff up and into en route airspace. They similarly take arriving aircraft from en route airspace, line them up with the landing approach and hand them off to tower controllers.

A controller at each display manages all the traffic within a sector. Sectors can vary in size from a few cubic miles, focused on sequencing aircraft landing at a busy airport, to en route sectors spanning more than 30,000 cubic miles (125,045 cubic km) where and when there are few aircraft flying. If a sector gets busy, a second and even third controller might assist, or the sector might be split into two, with another display and controller team managing the second.

How technology can help

Air traffic controllers have a stressful job and are subject to fatigue and information overload. Public concern about a growing number of close calls has put a spotlight on aging technology and staffing shortages that have led to air traffic controllers working mandatory overtime. New technologies can help alleviate those issues.

The air traffic control system is incorporating new technologies in several ways. The FAA’s NextGen air transportation system initiative is providing controllers with more – and more accurate – information.

Controllers’ displays originally showed only radar tracking. They now can tap into all the data known about each flight within the en route automation modernization system. This system integrates radar, automatic position reports from aircraft via automatic dependent surveillance-broadcast, weather reports, flight plans and flight histories.

Systems help alert controllers to potential conflicts between aircraft, or aircraft that are too close to high ground or structures, and provide suggestions to controllers to sequence aircraft into smooth traffic flows. In testimony to the U.S. Senate on Nov. 9, 2023, about airport safety, FAA Chief Operating Officer Timothy Arel said that the administration is developing or improving several air traffic control systems.

Researchers are using machine learning to analyze and predict aspects of air traffic and air traffic control, including air traffic flow between cities and air traffic controller behavior.

How technology can complicate matters

New technology can also cause profound changes to air traffic control in the form of new types of aircraft. For example, current regulations mostly limit uncrewed aircraft to fly lower than 400 feet (122 meters) above ground and away from airports. These are drones used by first responders, news organizations, surveyors, delivery services and hobbyists.

NASA and the FAA are leading the development of a traffic control system for drones and other uncrewed aircraft.

However, some emerging uncrewed aircraft companies are proposing to fly in controlled airspace. Some plan to have their aircraft fly regular flight routes and interact normally with air traffic controllers via voice radio. These include Reliable Robotics and Xwing, which are separately working to automate the Cessna Caravan, a small cargo airplane.

Others are targeting new business models, such as advanced air mobility, the concept of small, highly automated electric aircraft – electric air taxis, for example. These would require dramatically different routes and procedures for handling air traffic.

Expect the unexpected

An air traffic controller’s routine can be disrupted by an aircraft that requires special handling. This could range from an emergency to priority handling of medical flights or Air Force One. Controllers are given the responsibility and the flexibility to adapt how they manage their airspace.

The requirements for the front line of air traffic control are a poor match for AI’s capabilities. People expect air traffic to continue to be the safest complex, high-technology system ever. It achieves this standard by adhering to procedures when practical, which is something AI can do, and by adapting and exercising good judgment whenever something unplanned occurs or a new operation is implemented – a notable weakness of today’s AI.

Indeed, it is when conditions are the worst – when controllers figure out how to handle aircraft with severe problems, airport crises or widespread airspace closures due to security concerns or infrastructure failures – that controllers’ contributions to safety are the greatest.

Also, controllers don’t fly the aircraft. They communicate and interact with others to guide the aircraft, and so their responsibility is fundamentally to serve as part of a team – another notable weakness of AI.

As an engineer and designer, I’m most excited about the potential for AI to analyze the big data records of past air traffic operations in pursuit of, for example, more efficient routes of flight. However, as a pilot, I’m glad to hear a controller’s calm voice on the radio helping me land quickly and safely should I have a problem.

About the Author:

Amy Pritchett, Professor of Aerospace Engineering, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

Combining two types of molecular boron nitride could create a hybrid material used in faster, more powerful electronics

By Pulickel Ajayan, Rice University and Abhijit Biswas, Rice University 

In chemistry, structure is everything. Compounds with the same chemical formula can have different properties depending on the arrangement of the molecules they’re made of. And compounds with a different chemical formula but a similar molecular arrangement can have similar properties.

Graphene and a form of boron nitride called hexagonal boron nitride fall into the latter group. Graphene is made up of carbon atoms. Boron nitride, BN, is composed of boron and nitrogen atoms. While their chemical formulas differ, they have a similar structure – so similar that many chemists call hexagonal boron nitride “white graphene.”

Carbon-based graphene has lots of useful properties. It’s thin but strong, and it conducts heat and electricity very well, making it ideal for use in electronics.

Similarly, hexagonal boron nitride has a host of properties similar to graphene that could improve biomedical imaging and drug delivery, as well as computers, smartphones and LEDs. Researchers have studied this type of boron nitride for many years.

But, hexagonal boron nitride isn’t the only useful form this compound comes in.

As materials engineers, our research team has been investigating another type of boron nitride called cubic boron nitride. We want to know if combining the properties of hexagonal boron nitride with cubic boron nitride could open the door to even more useful applications.

Cubic boron nitride, shown on the left, and hexagonal boron nitride, shown on the right.
Oddball/Wikimedia Commons, CC BY-NC-SA

Hexagonal versus cubic

Hexagonal boron nitride is, as you might guess, boron nitride molecules arranged in the shape of a flat hexagon. It looks honeycomb-shaped, like graphene. Cubic boron nitride has a three-dimensional lattice structure and looks like a diamond at the molecular level.

H-BN is thin, soft and used in cosmetics to give them a silky texture. It doesn’t melt or degrade even under extreme heat, which also makes it useful in electronics and other applications. Some scientists predict it could be used to build a radiation shield for spacecraft.

C-BN is hard and resistant. It’s used in manufacturing to make cutting tools and drills, and it can keep its sharp edge even at high temperatures. It can also help dissipate heat in electronics.

Even though h-BN and c-BN might seem different, when put together, our research has found they hold even more potential than either on its own.

The two forms of boron nitride have some similarities and some differences, but when combined, they can create a substance with a variety of scientific applications.
Abhijit Biswas

Both types of boron nitride conduct heat and can provide electrical insulation, but one, h-BN, is soft, and the other, c-BN, is hard. So, we wanted to see if they could be used together to create materials with interesting properties.

For example, combining their different behaviors could make a coating material effective for high temperature structural applications. C-BN could provide strong adhesion to a surface, while h-BN’s lubricating properties could resist wear and tear. Both together would keep the material from overheating.

Making boron nitride

This class of materials doesn’t occur naturally, so scientists must make it in the lab. In general, high-quality c-BN has been difficult to synthesize, whereas h-BN is relatively easier to make as high-quality films, using what are called vapor phase deposition methods.

In vapor phase deposition, we heat up boron and nitrogen-containing materials until they evaporate. The evaporated molecules then get deposited onto a surface, cool down, bond together and form a thin film of BN.

Our research team has worked on combining h-BN and c-BN using similar processes to vapor phase deposition, but we can also mix powders of the two together. The idea is to build a material with the right mix of h-BN and c-BN for thermal, mechanical and electronic properties that we can fine-tune.

Our team has found the composite substance made from combining both forms of BN together has a variety of potential applications. When you point a laser beam at the substance, it flashes brightly. Researchers could use this property to create display screens and improve radiation therapies in the medical field.

We’ve also found we can tailor how heat-conductive the composite material is. This means engineers could use this BN composite in machines that manage heat. The next step is trying to manufacture large plates made of an h-BN and c-BN composite. If done precisely, we can tailor the mechanical, thermal and optical properties to specific applications.

In electronics, h-BN could act as a dielectric – or insulator – alongside graphene in certain low-power electronics. As a dielectric, h-BN would help electronics operate efficiently and keep their charge.

C-BN could work alongside diamond to create ultrawide band gap materials that allow electronic devices to work at a much higher power. Diamond and c-BN both conduct heat well, and together they could help cool down these high-power devices, which generate lots of extra heat.

H-BN and c-BN separately could lead to electronics that perform exceptionally well in different contexts – together, they have a host of potential applications, as well.

Our BN composite could improve heat spreaders and insulators, and it could work in energy storage machines like supercapacitors, which are fast-charging energy storage devices, and rechargeable batteries.

We’ll continue studying BN’s properties, and how we can use it in lubricants, coatings and wear-resistant surfaces. Developing ways to scale up production will be key for exploring its applications, from materials science to electronics and even environmental science.

About the Author:

Pulickel Ajayan, Professor of Materials Science and NanoEngineering, Rice University and Abhijit Biswas, Research Scientist in Materials Science and Nanoengineering, Rice University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

From besting Tetris AI to epic speedruns – inside gaming’s most thrilling feats

By James Dawes, Macalester College 

After 13-year-old Willis Gibson became the first human to beat the original Nintendo version of Tetris, he dedicated his special win to his father, who passed away in December 2023.

The Oklahoma teen beat the game by defeating level after level until he reached the “kill screen” – that is, the moment when the Tetris artificial intelligence taps out in exhaustion, stopping play because its designers never wrote the code to advance further. Before Gibson, the only other player to overcome the game’s AI was another AI.

For any parent who has despaired over their children sinking countless hours into video games, Gibson’s victory over the cruel geometry of Tetris stands as a bracing corrective.

Despite the stereotypes, most gamers are anything but lazy. And they’re anything but mindless.

The world’s top players can sometimes serve as reminders of the best in us, with memorable achievements that range from the heroic to the inscrutably weird.

The perfect run

“Speedrunning” is a popular gaming subculture in which players meticulously optimize routes and exploit glitches to complete, in a matter of minutes, games that normally take hours, from the tightly constrained, run-and-gun action game Cuphead to the sprawling role-playing epic Baldur’s Gate 3.

In top-level competition, speedrunners strive to match the time of what’s referred to as a “TAS,” or “tool-assisted speed run.” To figure out the TAS time, players use game emulators to choreograph a theoretically perfect playthrough, advancing the game one frame at a time to determine the fastest possible time.

Success requires punishing precision, flawless execution and years of training.

The major speedrunning milestones are, like Olympic races, marked by mere fractions of a second. The urge to speedrun likely sprouts from an innate human longing for perfection – and a uniquely 21st century compulsion to best the robots.

A Twitch streamer who goes by the username Niftski is currently the human who has come closest to achieving this androidlike perfection. His 4-minute, 54.631-second world-record speedrun of Super Mario Bros. – achieved in September 2023 – is just 0.35 seconds shy of a flawless TAS.

Watching Niftski’s now-famous run is a dissonant experience. Goofy, retro, 8-bit Mario jumps imperturbably over goombas and koopa troopas with the iconic, cheerful “boink” sound of his hop.

Meanwhile, Niftski pants as his anxiety builds, his heart rate – tracked on screen during the livestream – peaking at 188 beats per minute.

When Mario bounces over the final big turtle at the finish line – “boink” – Niftski erupts into screams of shock and repeated cries of “Oh my God!”

He hyperventilates, struggles for oxygen and finally sobs from exhaustion and joy.

Twitch streamer Niftski’s record speedrun of Super Mario Bros. missed perfection by 0.35 seconds.

The largest world and its longest pig ride

This list couldn’t be complete without an achievement from Minecraft, the revolutionary video game that has become the second-best-selling title in history, with over 300 million copies sold – second only to Tetris’ 520 million units.

Minecraft populates the video game libraries of grade-schoolers and has been used as an educational tool in university classrooms. Even the British Museum has held an exhibition devoted to the game.

Minecraft is known as a sandbox game, which means that gamers can create and explore their own virtual worlds, limited only by their imagination and a few simple tools and resources – like buckets and sand, or, in the case of Minecraft, pickaxes and stone.

So what can you do in the Minecraft playground?

Well, you can ride on a pig. The Guinness Book of World Records marks the farthest distance at 414 miles. Or you can collect sunflowers. The world record for that is 89 in one minute. Or you can dig a tunnel – but you’ll need to make it 100,001 blocks long to edge out the current record.

My personal favorite is a collective, ongoing effort: a sprawling, global collaboration to recreate the world on a 1:1 scale using Minecraft blocks, with each block counting as one cubic meter.

At their best, sandbox games like Minecraft can bring people closer to the joyful and healthily pointless play of childhood – a restorative escape from the anxious, utility-driven planning that dominates so much of adulthood.

Popular YouTuber MrBeast contributes to ‘Build the Earth’ by constructing a Minecraft replica of Raleigh, N.C.

The galaxy’s greatest collaboration

The Halo 3 gaming community participated in a bloodier version of the collective effort of Minecraft players.

The game, which pits humans against an alien alliance known as the Covenant, was released in 2007 to much fanfare.

Whether they were playing the single-player campaign mode or the online multiplayer mode, gamers around the world started seeing themselves as imaginary participants in a global cause to save humanity – in what came to be known as the “Great War.”

They organized round-the-clock campaign shifts, while sharing strategies in nearly 6,000 Halo wiki articles and 21 million online discussion posts.

Halo developer Bungie started tracking total alien deaths by all players, with the 10 billion milestone reached in April 2009.

Game designer Jane McGonigal recalls with awe the community effort that went into that Great War, citing it as a transcendent example of the fundamental human desire to work together and to become a part of something bigger than the self.

Bungie maintained a collective history of the Great War in the form of “personal service records” that memorialized each player’s contributions – medals, battle statistics, campaign maps and more.

The archive beggars comprehension: According to Bungie, its servers handled 1.4 petabytes of data requests by players in one nine-month stretch. McGonigal notes, by way of comparison, that everything ever written by humans in all of recorded history amounts to 50 petabytes of data.

Gamification versus gameful design

If you’re mystified by the behavior of these gamers, you’re not alone.

Over the past decade, researchers across a range of fields have marveled at the dedication of gamers like Gibson and Niftski, who commit themselves without complaint to what some might see as punishing, pointless and physically grueling labor.

How could this level of dedication be applied to more “productive” endeavors, they wondered, like education, taxes or exercise?

From this research, an industry centered on the “gamification” of work, life and learning emerged. It giddily promised to change people’s behaviors through the use of extrinsic motivators borrowed from the gaming community: badges, achievements, community scorekeeping.

The concept caught fire, spreading everywhere from early childhood education to the fast-food industry.

Many game designers have reacted to this trend like Robert Oppenheimer at the close of the eponymous movie – aghast that their beautiful work was used, for instance, to pressure Disneyland Resort laborers to load laundry and press linens at anxiously hectic speeds.

Arguing that the gamification trend misses entirely the magic of gaming, game designers have instead started promoting the concept of “gameful design.” Where gamification focuses on useful outcomes, gameful design focuses on fulfilling experiences.

Gameful design prioritizes intrinsic motivation over extrinsic incentives. It embraces design elements that promote social connection, creativity, a sense of autonomy – and, ultimately, the sheer joy of mastery.

When I think of Niftski’s meltdown after his record speedrun – and of Gibson, who also began hyperventilating in shock and almost passed out – I think of my own children.

I wish for them such moments of ecstatic, prideful accomplishment in a world that sometimes seems starved of joy.

About the Author:

James Dawes, Professor of English, Macalester College

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

AI is here – and everywhere: 3 AI researchers look to the challenges ahead in 2024

By Anjana Susarla, Michigan State University; Casey Fiesler, University of Colorado Boulder, and Kentaro Toyama, University of Michigan 

2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.

We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.


Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder

2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.

One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so – in ways that often do more harm than good.

However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations – and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.

So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
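For contrast, here is a minimal ELIZA-style responder in Python – a sketch in the spirit of Weizenbaum’s pattern matching and substitution, not a reconstruction of his actual DOCTOR script – whose “magic” crumbles as soon as its handful of rules is read.

```python
import re

# A few ELIZA-style rules: a regular expression paired with a response template.
# The rules are invented for illustration; the original program's script was
# larger, but it was built on the same pattern-matching idea.
RULES = [
    (re.compile(r"\bi need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT_REPLY

print(respond("I am worried about my exams"))  # How long have you been worried about my exams?
print(respond("I need a break"))               # Why do you need a break?
print(respond("The weather is nice"))          # Please go on.
# (A fuller script would also swap pronouns, turning "my exams" into "your exams".)
```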

I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.

Many of the challenges in the year ahead have to do with problems of AI that society is already facing.

Kentaro Toyama, Professor of Community Information, University of Michigan

In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” With the singularity – the moment artificial intelligence matches and begins to exceed human intelligence – not quite here yet, it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.

Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.

The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning – what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.

Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire – comically, tragically or both. Deepfakes – AI-generated images and videos that are difficult to detect – are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.

Speaking of problems, the very people sounding the loudest alarms about AI – like Elon Musk and Sam Altman – can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 – though it seems slow in coming – is stronger AI regulation, at national and international levels.


Anjana Susarla, Professor of Information Systems, Michigan State University

In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year ago, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.

Companies are racing to develop LLMs that can be deployed on a variety of hardware and in a variety of applications, including running an LLM on your smartphone. The emergence of these lightweight LLMs and open source LLMs could usher in a world of autonomous AI agents – a world that society is not necessarily prepared for.

These advanced AI capabilities offer immense transformative power in applications ranging from business to precision medicine. My chief concern is that such advanced capabilities will pose new challenges for distinguishing between human-generated content and AI-generated content, as well as pose new types of algorithmic harms.

The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.

The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.

A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but to consider the contexts the algorithms operate in: people, processes and society.

About the Authors:

Anjana Susarla, Professor of Information Systems, Michigan State University; Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder, and Kentaro Toyama, Professor of Community Information, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 

What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers

By Daniel Lidar, University of Southern California 

Quantum advantage is the milestone the field of quantum computing is fervently working toward, where a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.

There are some types of problems that are impractical for classical computers to solve, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.

I am a physicist who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.

The source of quantum computing’s power

Central to quantum computing is the quantum bit, or qubit. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a quantum superposition. With every additional qubit, the number of states that can be represented by the qubits doubles.
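In standard notation – added here for illustration, since the article itself avoids formulas – a single qubit and an n-qubit register can be written as follows.

```latex
% A single qubit is a superposition of the basis states |0> and |1>:
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^2 + |\beta|^2 = 1,
\]
% where |alpha|^2 and |beta|^2 are the probabilities of measuring 0 or 1.
% An n-qubit register carries 2^n amplitudes at once, the doubling noted above:
\[
  |\Psi\rangle = \sum_{x \in \{0,1\}^n} c_x\,|x\rangle,
  \qquad \sum_{x \in \{0,1\}^n} |c_x|^2 = 1.
\]
```

Interference, described next, is these amplitudes adding or cancelling when they are combined; entanglement means the amplitudes of a multi-qubit state cannot be factored into independent single-qubit descriptions.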

This property is often mistaken for the source of the power of quantum computing. Instead, it comes down to an intricate interplay of superposition, interference and entanglement.

Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves – like sound waves or ocean waves – combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.

Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement’s collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.

The ones and zeros – and everything in between – of quantum computing.

Applications of quantum computing

Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the potential to decipher current encryption algorithms, such as the widely used RSA scheme.

One consequence of this is that today’s encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of post-quantum cryptography. After a long process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.

In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman envisioned this possibility more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in areas such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties.

Another use of quantum information technology is quantum sensing: detecting and measuring physical properties like electromagnetic energy, gravity, pressure and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as environmental monitoring, geological exploration, medical imaging and surveillance.

Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks – including those using quantum computers.

Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage – in particular in machine learning – remains a critical area of ongoing research.

A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves.
Guoqing Wang, CC BY-NC-ND

Staying coherent and overcoming errors

The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits.

Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, an area my own research is focused on.

In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology’s transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.

Quantum advantage coming into view

Quantum computing may one day be as disruptive as the arrival of generative AI. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. Researchers at Google and later a team of researchers in China demonstrated quantum advantage for generating a list of random numbers with certain properties. My research team demonstrated a quantum speed-up for a random number guessing game.

On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.

While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright students of the type I encounter almost every day, ensures that the field will continue to progress.

About the Author:

Daniel Lidar, Professor of Electrical Engineering, Chemistry, and Physics & Astronomy, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Amazon’s AI move – why you need AI investments as race speeds up

By George Prior

Amazon’s $4bn investment into a ChatGPT rival reinforces why almost all investors should have some artificial intelligence (AI) exposure in their investment mix, says the CEO of one of the world’s largest independent financial advisory, asset management and fintech organizations.

The comments from Nigel Green of deVere Group come as e-commerce giant Amazon said on Monday it will invest $4 billion in Anthropic and take a minority ownership position. Anthropic was founded by former executives of OpenAI, the company behind ChatGPT, and recently debuted its new AI chatbot, Claude 2.

He says: “This move highlights how the big tech titan is stepping up its rivalry with other giants Microsoft, Google and Nvidia in the AI space.

“The AI Race is on, with the big tech firms racing to lead in the development, deployment, and utilisation of artificial intelligence technologies.

“AI is going to reshape whole industries and fuel innovation – and this makes it crucial for investors to pay attention and why almost all investors need exposure to AI investments in their portfolios.”

While it seems that the AI hype is everywhere now, we are still very early in the AI era.  Investors, says the deVere CEO, should act now to have the ‘early advantage’.

“Getting in early allows investors to establish a competitive advantage over latecomers. They can secure favourable entry points and lower purchase prices, maximizing their potential profits.

“This tech has the potential to disrupt existing industries or create entirely new ones. Early investors are likely to benefit from the exponential growth that often accompanies the adoption of such technologies. As these innovations gain traction, their valuations could skyrocket, resulting in significant returns on investment,” he notes.

While AI is The Big Story currently, investors should, as always, remain diversified across asset classes, sectors and regions in order to maximise returns per unit of risk (volatility) incurred.

Diversification remains investors’ best tool for long-term financial success. As a strategy it has been proven to reduce risk, smooth-out volatility, exploit differing market conditions, maximise long-term returns and protect against unforeseen external events.

Of the latest Amazon investment, Nigel Green concludes: “AI is not just another technology trend; it is a game-changer. Investors need to pay attention and include it as part of their mix.”

About:

deVere Group is one of the world’s largest independent advisors of specialist global financial solutions to international, local mass affluent, and high-net-worth clients.  It has a network of offices across the world, over 80,000 clients and $12bn under advisement.

AI and new standards promise to make scientific data more useful by making it reusable and accessible

By Bradley Wade Bishop, University of Tennessee 

Every time a scientist runs an experiment, or a social scientist does a survey, or a humanities scholar analyzes a text, they generate data. Science runs on data – without it, we wouldn’t have the James Webb Space Telescope’s stunning images, disease-preventing vaccines or an evolutionary tree that traces the lineages of all life.

This scholarship generates an unimaginable amount of data – so how do researchers keep track of it? And how do they make sure that it’s accessible for use by both humans and machines?

To improve and advance science, scientists need to be able to reproduce others’ data or combine data from multiple sources to learn something new.

Accessible and usable data can help scientists reproduce prior results. Doing so is an important part of the scientific process, as this TED-Ed video explains.

Any kind of sharing requires management. If your neighbor needs to borrow a tool or an ingredient, you have to know whether you have it and where you keep it. Research data might be on a graduate student’s laptop, buried in a professor’s USB collection or saved more permanently within an online data repository.

I’m an information scientist who studies other scientists. More precisely, I study how scientists think about research data and the ways that they interact with their own data and data from others. I also teach students how to manage their own or others’ data in ways that advance knowledge.

Research data management

Research data management is an area of scholarship that focuses on data discovery and reuse. As a field, it covers research data services, resources and cyberinfrastructure. For example, one type of infrastructure, the data repository, gives researchers a place to deposit their data for long-term storage so that others can find it. In short, research data management encompasses the data’s life cycle from cradle to grave to reincarnation in the next study.

Proper research data management also allows scientists to reuse data that is already out there rather than collecting it again, which saves time and resources.

With increasing science politicization, many national and international science organizations have upped their standards for accountability and transparency. Federal agencies and other major research funders like the National Institutes of Health now prioritize research data management and require researchers to have a data management plan before they can receive any funds.

Scientists and data managers can work together to redesign the systems scientists use to make data discovery and preservation easier. In particular, integrating AI can make this data more accessible and reusable.

Artificially intelligent data management

Many of these new standards for research data management also stem from an increased use of AI, including machine learning, across data-driven fields. AI makes it highly desirable for any data to be machine-actionable – that is, usable by machines without human intervention. Now, scholars can consider machines not only as tools but also as potential autonomous data reusers and collaborators.

The key to machine-actionable data is metadata. Metadata are the descriptions scientists set for their data and may include elements such as creator, date, coverage and subject. Minimal metadata is minimally useful, but correct and complete standardized metadata makes data more useful for both people and machines.
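To make that concrete, here is a minimal sketch of what a standardized, machine-readable metadata record might look like. The field names and values are purely illustrative, loosely based on the elements mentioned above rather than any particular repository’s schema.

```python
import json

# Illustrative metadata record for a hypothetical dataset. The fields mirror
# the elements named above (creator, date, coverage, subject) plus a few that
# repositories commonly ask for; all values are made up.
record = {
    "title": "Stream temperature measurements, example watershed, 2022",
    "creator": "Example Research Group",
    "date": "2022-09-30",
    "coverage": "Example watershed, Tennessee, USA",
    "subject": ["hydrology", "water temperature"],
    "format": "text/csv",
    "license": "CC-BY-4.0",
    "identifier": "doi:10.0000/placeholder",  # placeholder, not a real DOI
}

# Serializing the record to JSON is one simple way to make it machine-readable,
# so repository software and harvesting tools can index it without human help.
print(json.dumps(record, indent=2))
```

Complete, consistent records like this are what allow both a person browsing a repository and a machine harvesting it to judge whether a dataset fits their needs.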

It takes a cadre of research data managers and librarians to make machine-actionable data a reality. These information professionals work to facilitate communication between scientists and systems by ensuring the quality, completeness and consistency of shared data.

The FAIR data principles, created by a group of researchers called FORCE11 in 2016 and used across the world, provide guidance on how to enable data reuse by machines and humans. FAIR data is findable, accessible, interoperable and reusable – meaning it has robust and complete metadata.

In the past, I’ve studied how scientists discover and reuse data. I found that scientists tend to use mental shortcuts when they’re looking for data – for example, they may go back to familiar and trusted sources or search for certain key terms they’ve used before. Ideally, my team could build this expert decision-making process into AI while removing as many biases as possible. Automating these mental shortcuts should reduce the time-consuming chore of locating the right data.
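As a toy illustration of what automating such a shortcut might look like, the sketch below ranks candidate datasets by how many of a researcher’s previously used search terms appear in their subject metadata. The catalog, the terms and the scoring rule are hypothetical and far simpler than a real data-discovery system.

```python
# Toy sketch: rank candidate datasets by overlap between their subject
# metadata and terms a researcher has searched for before. Everything here
# (catalog, terms, scoring) is hypothetical and deliberately simple.

past_search_terms = {"hydrology", "water temperature"}

catalog = [
    {"title": "Stream temperature, example watershed",
     "subject": {"hydrology", "water temperature"}},
    {"title": "Urban air quality survey",
     "subject": {"air quality", "public health"}},
    {"title": "Snowpack depth records",
     "subject": {"hydrology", "snow"}},
]

def familiarity_score(record):
    # Count how many familiar terms appear in the record's subject metadata.
    return len(past_search_terms & record["subject"])

# Print the most familiar-looking datasets first.
for record in sorted(catalog, key=familiarity_score, reverse=True):
    print(familiarity_score(record), record["title"])
```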

Data management plans

But there’s still one piece of research data management that AI can’t take over. Data management plans describe the what, where, when, why and who of managing research data. Scientists fill them out, and they outline the roles and activities for managing research data during and long after research ends. They answer questions like, “Who is responsible for long-term preservation?”, “Where will the data live?”, “How do I keep my data secure?” and “Who pays for all of that?”
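Purely as an illustration, the questions a plan answers could also be captured in a simple structured form like the sketch below. The fields and answers are hypothetical and do not follow any particular funder’s template.

```python
# Hypothetical, simplified data management plan captured as structured data.
# Real funders have their own templates; this only illustrates the kinds of
# questions (who, where, how, for how long, at what cost) a plan answers.
data_management_plan = {
    "dataset": "Stream temperature measurements, example watershed, 2022",
    "responsible_party": "PI and the university library's data services team",
    "storage_during_project": "Encrypted institutional storage with nightly backups",
    "long_term_repository": "A domain data repository, selected at deposit time",
    "security": "Access limited to project members; no personal data collected",
    "retention_period_years": 10,
    "costs_covered_by": "Data deposit fees budgeted in the grant",
}

# Print the plan as question-and-answer pairs.
for question, answer in data_management_plan.items():
    print(f"{question}: {answer}")
```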

Grant proposals for nearly all funding agencies across countries now require data management plans. These plans signal to scientists that their data is valuable and important enough to the community to share. Also, the plans help funding agencies keep tabs on the research and investigate any potential misconduct. But most importantly, they help scientists make sure their data stays accessible for many years.

Making all research data as FAIR and open as possible will improve the scientific process. And having access to more data opens up the possibility for more informed discussions on how to promote economic development, improve the stewardship of natural resources, enhance public health, and responsibly and ethically develop technologies that will improve lives. All intelligence, artificial or otherwise, will benefit from better organization, access and use of research data.

About the Author:

Bradley Wade Bishop, Professor of Information Sciences, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.

US agencies buy vast quantities of personal information on the open market – a legal scholar explains why and what it means for privacy in the age of AI

By Anne Toomey McKenna, University of Richmond 

Numerous government agencies, including the FBI, Department of Defense, National Security Agency, Treasury Department, Defense Intelligence Agency, Navy and Coast Guard, have purchased vast amounts of U.S. citizens’ personal information from commercial data brokers. The revelation was published in a partially declassified, internal Office of the Director of National Intelligence report released on June 9, 2023.

The report shows the breathtaking scale and invasive nature of the consumer data market and how that market directly enables wholesale surveillance of people. The data includes not only where you’ve been and who you’re connected to, but the nature of your beliefs and predictions about what you might do in the future. The report underscores the grave risks the purchase of this data poses, and urges the intelligence community to adopt internal guidelines to address these problems.

As a privacy, electronic surveillance and technology law attorney, researcher and law professor, I have spent years researching, writing and advising about the legal issues the report highlights.

These issues are increasingly urgent. Today’s commercially available information, coupled with the now-ubiquitous decision-making artificial intelligence and generative AI like ChatGPT, significantly increases the threat to privacy and civil liberties by giving the government access to sensitive personal information beyond even what it could collect through court-authorized surveillance.

What is commercially available information?

The drafters of the report take the position that commercially available information is a subset of publicly available information. The distinction between the two is significant from a legal perspective. Publicly available information is information that is already in the public domain. You could find it by doing a little online searching.

Commercially available information is different. It is personal information collected from a dizzying array of sources by commercial data brokers that aggregate and analyze it, then make it available for purchase by others, including governments. Some of that information is private, confidential or otherwise legally protected.

The commercial data market collects and packages vast amounts of data and sells it for various commercial, private and government uses.
Chart: U.S. Government Accountability Office

The sources and types of data for commercially available information are mind-bogglingly vast. They include public records and other publicly available information. But far more information comes from the nearly ubiquitous internet-connected devices in people’s lives, like cellphones, smart home systems, cars and fitness trackers. These all harness data from sophisticated, embedded sensors, cameras and microphones. Sources also include data from apps, online activity, texts and emails, and even health care provider websites.

Types of data include location, gender and sexual orientation, religious and political views and affiliations, weight and blood pressure, speech patterns, emotional states, behavioral information about myriad activities, shopping patterns and family and friends.

This data provides companies and governments a window into the “Internet of Behaviors,” a combination of data collection and analysis aimed at understanding and predicting people’s behavior. It pulls together a wide range of data, including location and activities, and uses scientific and technological approaches, including psychology and machine learning, to analyze that data. The Internet of Behaviors provides a map of what each person has done, is doing and is expected to do, and provides a means to influence a person’s behavior.

Smart homes could be good for your wallet and good for the environment, but really bad for your privacy.

Better, cheaper and unrestricted

The rich depths of commercially available information, analyzed with powerful AI, provide unprecedented power, intelligence and investigative insights. The information is a cost-effective way to surveil virtually everyone, plus it provides far more sophisticated data than traditional electronic surveillance tools or methods like wiretapping and location tracking.

Government use of electronic surveillance tools is extensively regulated by federal and state laws. The U.S. Supreme Court has ruled that the Constitution’s Fourth Amendment, which prohibits unreasonable searches and seizures, requires a warrant for a wide range of digital searches. These include wiretapping or intercepting a person’s calls, texts or emails; using GPS or cellular location information to track a person; or searching a person’s cellphone.

Complying with these laws takes time and money, plus electronic surveillance law restricts what, when and how data can be collected. Commercially available information is cheaper to obtain, provides far richer data and analysis, and is subject to little oversight or restriction compared to when the same data is collected directly by the government.

The threats

Technology and the burgeoning volume of commercially available information allow various forms of the information to be combined and analyzed in new ways to understand all aspects of your life, including preferences and desires.

How the collection, aggregation and sale of your data violates your privacy.

The Office of the Director of National Intelligence report warns that the increasing volume and widespread availability of commercially available information poses “significant threats to privacy and civil liberties.” It increases the power of the government to surveil its citizens outside the bounds of law, and it opens the door to the government using that data in potentially unlawful ways. This could include using location data obtained via commercially available information rather than a warrant to investigate and prosecute someone for abortion.

The report also captures both how widespread government purchases of commercially available information are and how haphazard government practices around the use of the information are. The purchases are so pervasive and agencies’ practices so poorly documented that the Office of the Director of National Intelligence cannot even fully determine how much and what types of information agencies are purchasing, and what the various agencies are doing with the data.

Is it legal?

The question of whether it’s legal for government agencies to purchase commercially available information is complicated by the array of sources and complex mix of data it contains.

There is no legal prohibition on the government collecting information already disclosed to the public or otherwise publicly available. But the nonpublic information listed in the declassified report includes data that U.S. law typically protects. The nonpublic information’s mix of private, sensitive, confidential or otherwise lawfully protected data makes collection a legal gray area.

Despite decades of increasingly sophisticated and invasive commercial data aggregation, Congress has not passed a federal data privacy law. The lack of federal regulation around data creates a loophole for government agencies to evade electronic surveillance law. It also allows agencies to amass enormous databases that AI systems learn from and use in often unrestricted ways. The resulting erosion of privacy has been a concern for more than a decade.

Throttling the data pipeline

The Office of the Director of National Intelligence report acknowledges the stunning loophole that commercially available information provides for government surveillance: “The government would never have been permitted to compel billions of people to carry location tracking devices on their persons at all times, to log and track most of their social interactions, or to keep flawless records of all their reading habits. Yet smartphones, connected cars, web tracking technologies, the Internet of Things, and other innovations have had this effect without government participation.”

However, it isn’t entirely correct to say “without government participation.” The legislative branch could have prevented this situation by enacting data privacy laws, more tightly regulating commercial data practices, and providing oversight of AI development. Congress could yet address the problem. Representative Ted Lieu has introduced a bipartisan proposal for a National AI Commission, and Senator Chuck Schumer has proposed an AI regulation framework.

Effective data privacy laws would keep your personal information safer from government agencies and corporations, and responsible AI regulation would block them from manipulating you.

About the Author:

Anne Toomey McKenna, Visiting Professor of Law, University of Richmond

This article is republished from The Conversation under a Creative Commons license. Read the original article.