
The Colonial Pipeline ransomware attack and the SolarWinds hack were all but inevitable – why national cyber defense is a ‘wicked’ problem

By Terry Thompson, Johns Hopkins University 

Takeaways:

· There are no easy solutions to shoring up U.S. national cyber defenses.

· Software supply chains and private sector infrastructure companies are vulnerable to hackers.

· Many U.S. companies outsource software development because of a talent shortage, and some of that outsourcing goes to companies in Eastern Europe that are vulnerable to Russian operatives.

· U.S. national cyber defense is split between the Department of Defense and the Department of Homeland Security, which leaves gaps in authority.

The ransomware attack on Colonial Pipeline on May 7, 2021, exemplifies the huge challenges the U.S. faces in shoring up its cyber defenses. The private company, which controls a significant component of the U.S. energy infrastructure and supplies nearly half of the East Coast’s liquid fuels, was vulnerable to an all-too-common type of cyber attack. The FBI has attributed the attack to a Russian cybercrime gang. It would be difficult for the government to mandate better security at private companies, and the government is unable to provide that security for the private sector.

Similarly, the SolarWinds hack, one of the most devastating cyber attacks in history, which came to light in December 2020, exposed vulnerabilities in global software supply chains that affect government and private sector computer systems. It was a major breach of national security that revealed gaps in U.S. cyber defenses.

These gaps include inadequate security by a major software producer, fragmented authority for government support to the private sector, blurred lines between organized crime and international espionage, and a national shortfall in software and cybersecurity skills. None of these gaps is easily bridged, but the scope and impact of the SolarWinds attack show how critical controlling these gaps is to U.S. national security.

The SolarWinds breach, likely carried out by a group affiliated with Russia’s FSB security service, compromised the software development supply chain used by SolarWinds to update 18,000 users of its Orion network management product. SolarWinds sells software that organizations use to manage their computer networks. The hack, which allegedly began in early 2020, was discovered only in December when cybersecurity company FireEye revealed that it had been hit by the malware. More worrisome, this may have been part of a broader attack on government and commercial targets in the U.S.

The Biden administration is preparing an executive order that is expected to address these software supply chain vulnerabilities. However, these changes, as important as they are, would probably not have prevented the SolarWinds attack. And preventing ransomware attacks like the Colonial Pipeline attack would require U.S. intelligence and law enforcement to infiltrate every organized cyber criminal group in Eastern Europe.

Supply chains, sloppy security and a talent shortage

The vulnerability of the software supply chain – the collections of software components and software development services companies use to build software products – is a well-known problem in the security field. In response to a 2017 executive order, a report by a Department of Defense-led interagency task force identified “a surprising level of foreign dependence,” workforce challenges and critical capabilities such as printed circuit board manufacturing that companies are moving offshore in pursuit of competitive pricing. All these factors came into play in the SolarWinds attack.

SolarWinds, driven by its growth strategy and plans to spin off its managed service provider business in 2021, bears much of the responsibility for the damage, according to cybersecurity experts. I believe that the company put itself at risk by outsourcing its software development to Eastern Europe, including a company in Belarus. Russian operatives have been known to use companies in former Soviet satellite countries to insert malware into software supply chains. Russia used this technique in the 2017 NotPetya attack that cost global companies more than US$10 billion.


SolarWinds also failed to practice basic cybersecurity hygiene, according to a cybersecurity researcher.

Vinoth Kumar reported that the password for the software company’s development server was allegedly “solarwinds123,” an egregious violation of fundamental standards of cybersecurity. SolarWinds’ sloppy password management is ironic in light of the Password Management Solution of the Year award the company received in 2019 for its Passportal product.

In a blog post, the company admitted that “the attackers were able to circumvent threat detection techniques employed by both SolarWinds, other private companies, and the federal government.”

The larger question is why SolarWinds, an American company, had to turn to foreign providers for software development. A Department of Defense report about supply chains characterizes the lack of software engineers as a crisis, partly because the education pipeline is not providing enough software engineers to meet demand in the commercial and defense sectors.

There’s also a shortage of cybersecurity talent in the U.S. Security engineers, software developers and network engineers are among the most needed roles across the country, and the shortage of engineers who focus specifically on software security is acute.

Fragmented authority

Though I’d argue SolarWinds has much to answer for, it should not have had to defend itself against a state-orchestrated cyber attack on its own. The 2018 National Cyber Strategy describes how supply chain security should work. The government determines the security of federal contractors like SolarWinds by reviewing their risk management strategies, ensuring that they are informed of threats and vulnerabilities and responding to incidents on their systems.

However, this official strategy split these responsibilities between the Pentagon for defense and intelligence systems and the Department of Homeland Security for civil agencies, continuing a fragmented approach to information security that began in the Reagan era. Execution of the strategy relies on the DOD’s U.S. Cyber Command and DHS’s Cyber and Infrastructure Security Agency. DOD’s strategy is to “defend forward”: that is, to disrupt malicious cyber activity at its source, which proved effective in the runup to the 2018 midterm elections. The Cyber and Infrastructure Security Agency, established in 2018, is responsible for providing information about threats to critical infrastructure sectors.

Neither agency appears to have sounded a warning or attempted to mitigate the attack on SolarWinds. The government’s response came only after the attack. The Cyber and Infrastructure Security Agency issued alerts and guidance, and a Cyber Unified Coordination Group was formed to facilitate coordination among federal agencies.

These tactical actions, while useful, were only a partial solution to the larger, strategic problem. The fragmentation of the authorities for national cyber defense evident in the SolarWinds hack is a strategic weakness that complicates cybersecurity for the government and private sector and invites more attacks on the software supply chain.

A wicked problem

National cyber defense is an example of a “wicked problem,” a policy problem that has no clear solution or measure of success. The Cyberspace Solarium Commission identified many inadequacies of U.S. national cyber defenses. In its 2020 report, the commission noted that “There is still not a clear unity of effort or theory of victory driving the federal government’s approach to protecting and securing cyberspace.”

Many of the factors that make developing a centralized national cyber defense challenging lie outside of the government’s direct control. For example, economic forces push technology companies to get their products to market quickly, which can lead them to take shortcuts that undermine security. Legislation along the lines of the Gramm-Leach-Bliley Act passed in 1999 could help deal with the need for speed in software development. The law placed security requirements on financial institutions. But software development companies are likely to push back against additional regulation and oversight.

The Biden administration appears to be taking the challenge seriously. The president has appointed a national cybersecurity director to coordinate related government efforts. It remains to be seen whether and how the administration will address the problem of fragmented authorities and clarify how the government will protect companies that supply critical digital infrastructure. It’s unreasonable to expect any U.S. company to be able to fend for itself against a foreign nation’s cyberattack.

Steps forward

In the meantime, software developers can apply the secure software development approach advocated by the National Institute of Standards and Technology. Government and industry can prioritize the development of artificial intelligence that can identify malware in existing systems. All this takes time, however, and hackers move quickly.
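One concrete practice from secure development guidance like NIST’s is verifying the integrity of third-party components before they enter a build, so that a tampered update like the one SolarWinds shipped is caught early. The sketch below, in Python, checks a downloaded component against a pinned SHA-256 digest; the file paths and digests here are illustrative, not from any real product.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_component(path: str, expected_sha256: str) -> bool:
    """Reject a dependency whose digest does not match the pinned value."""
    return sha256_of(path) == expected_sha256.lower()
```

In practice the pinned digest would come from a signed manifest or lockfile, so an attacker who modifies the artifact in transit or on a build server also has to forge the signature.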

Finally, companies need to aggressively assess their vulnerabilities, particularly by engaging in more “red teaming” activities: that is, having employees, contractors or both play the role of hackers and attack the company.

Recognizing that hackers in the service of foreign adversaries are dedicated, thorough and not constrained by any rules is important for anticipating their next moves and reinforcing and improving U.S. national cyber defenses. Otherwise, Colonial Pipeline is unlikely to be the last victim of a major attack on U.S. infrastructure and SolarWinds is unlikely to be the last victim of a major attack on the U.S. software supply chain.

This is an updated version of an article originally published on February 9, 2021.

About the Author:

Terry Thompson, Adjunct Instructor in Cybersecurity, Johns Hopkins University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Embrace the unexpected: To teach AI how to handle new situations, change the rules of the game

By Mayank Kejriwal, University of Southern California 

My colleagues and I changed a digital version of Monopoly so that instead of getting US$200 each time a player passes Go, the player is charged a wealth tax. We didn’t do this to gain an advantage or trick anyone. The purpose is to throw a curveball at artificial intelligence agents that play the game.
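The rule change described above amounts to a one-function patch on the game’s “pass Go” logic. The sketch below is a hypothetical illustration, not the authors’ actual simulator code, and the 5% tax rate is an assumption for the example.

```python
class Player:
    def __init__(self, cash: float, property_value: float = 0.0):
        self.cash = cash
        self.property_value = property_value  # nonliquid assets

def pass_go_standard(player: Player) -> None:
    # Classic rule: collect $200 for passing Go.
    player.cash += 200

def pass_go_wealth_tax(player: Player, rate: float = 0.05) -> None:
    # Novelty: instead of collecting $200, pay a tax on total wealth.
    player.cash -= rate * (player.cash + player.property_value)
```

An agent trained only against `pass_go_standard` has every incentive to hold minimal cash; swapping in `pass_go_wealth_tax` silently inverts that incentive, which is exactly the kind of surprise the experiment is designed to produce.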

Our aim is to help the agents learn to handle unexpected events, something AIs to date have been decidedly bad at. Giving AIs this kind of adaptability is important for futuristic systems like surgical robots, but also algorithms in the here and now that decide who should get bail, who should get approved for a credit card and whose resume gets through to a hiring manager. Not dealing well with the unexpected in any of those situations can have disastrous consequences.

AI agents need the ability to detect, characterize and adapt to novelty in human-like ways. A situation is novel if it challenges, directly or indirectly, an agent’s model of the external world, which includes other agents, the environment and their interactions.

While most people do not handle novelty perfectly, they are able to learn from their mistakes and adapt. Faced with a wealth tax in Monopoly, a human player might realize that she should have cash handy for the IRS as she is approaching Go. An AI player, bent on aggressively acquiring properties and monopolies, may fail to realize the appropriate balance between cash and nonliquid assets until it’s too late.

Adapting to novelty in open worlds

Reinforcement learning is the field that is largely responsible for “superhuman” game-playing AI agents and applications like self-driving cars. Reinforcement learning uses rewards and punishment to allow AI agents to learn by trial and error. It is part of the larger AI field of machine learning.
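The trial-and-error learning described here reduces, in its simplest tabular form, to a single update rule: nudge the value estimate for a state-action pair toward the reward received plus the discounted value of the best next action. The states, actions and reward below are toy placeholders, not part of any real game-playing system.

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state, actions,
                      alpha=0.1, gamma=0.9):
    """One step of tabular Q-learning: move Q[(state, action)] toward
    reward + gamma * (best value achievable from next_state)."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

Q = defaultdict(float)          # value estimates, initially zero
actions = ["buy", "pass"]
# An agent rewarded for buying a property updates its estimate:
q_learning_update(Q, "on_boardwalk", "buy", 10.0, "past_go", actions)
```

The key point for novelty: the table `Q` encodes only experience from the rules the agent has seen, so a rule change that alters the rewards leaves the agent acting on stale estimates until many more games of trial and error correct them.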

The learning in machine learning implies that such systems are already capable of dealing with limited types of novelty. Machine learning systems tend to do well on input data that are statistically similar, although not identical, to those on which they were originally trained. In practice, mild violations of this condition are tolerable, but only as long as nothing too unexpected happens.

Such systems can run into trouble in an open world. As the name suggests, open worlds cannot be completely and explicitly defined. The unexpected can, and does, happen. Most importantly, the real world is an open world.

However, the “superhuman” AIs are not designed to handle highly unexpected situations in an open world. One reason may be the use of modern reinforcement learning itself, which eventually leads the AI to be optimized for the specific environment in which it was trained. In real life, there are no such guarantees. An AI that is built for real life must be able to adapt to novelty in an open world.

Novelty as a first-class citizen

Returning to Monopoly, imagine that certain properties are subject to rent protection. A good player, human or AI, would recognize the properties as bad investments compared to properties that can earn higher rents and not purchase them. However, an AI that has never before seen this situation, or anything like it, will likely need to play many games before it can adapt.
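The judgment a good player makes here is a simple expected-value comparison: rent income per dollar of purchase price. The prices, rents and landing frequencies below are illustrative numbers only.

```python
def expected_return(price: float, rent: float, landings_per_game: float) -> float:
    """Expected rent income per game, per dollar of purchase price."""
    return (rent * landings_per_game) / price

# A rent-capped property earns far less per dollar than an uncapped one
# at the same price, so a good player skips it:
capped = expected_return(price=300, rent=50, landings_per_game=4)     # ~0.67
uncapped = expected_return(price=300, rent=200, landings_per_game=4)  # ~2.67
```

A human can run this comparison the moment the rent cap is announced; an agent trained purely on past games has no such shortcut and must rediscover the relationship through play.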

Before computer scientists can even start theorizing about how to build such “novelty-adaptive” agents, they need a rigorous method for evaluating them. Traditionally, most AI systems are tested by the same people who build them. Competitions are more impartial, but to date, no competition has evaluated AI systems in situations so unexpected that not even the system designers could have foreseen them. Such an evaluation is the gold standard for testing AI on novelty, similar to randomized controlled trials for evaluating drugs.

In 2019, the U.S. Defense Advanced Research Projects Agency launched a program called Science of Artificial Intelligence and Learning for Open-world Novelty, called SAIL-ON for short. It is currently funding many groups, including my own at the University of Southern California, to research novelty adaptation in open worlds.

One of the many ways in which the program is innovative is that a team can either develop an AI agent that handles novelty, or design an open-world environment for evaluating such agents, but not both. Teams that build an open-world environment must also theorize about novelty in that environment. They test their theories and evaluate the agents built by another group by developing a novelty generator. These generators can be used to inject unexpected elements into the environment.
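A novelty generator of the kind described can be thought of as a function that perturbs an environment’s configuration between games. The sketch below is a hypothetical illustration of that idea, not the SAIL-ON or GNOME API; the novelty names and rule patches are assumptions for the example.

```python
import random

def make_novelty_generator(novelties: dict, seed=None):
    """Return a function that injects one randomly chosen rule
    change into an environment's configuration."""
    rng = random.Random(seed)
    def inject(env_config: dict) -> dict:
        name, patch = rng.choice(sorted(novelties.items()))
        changed = dict(env_config)   # leave the original config untouched
        changed.update(patch)        # apply the rule change
        changed["active_novelty"] = name
        return changed
    return inject

novelties = {
    "wealth_tax":   {"pass_go_bonus": 0, "wealth_tax_rate": 0.05},
    "rent_control": {"max_rent": 50},
}
inject = make_novelty_generator(novelties, seed=0)
config = inject({"pass_go_bonus": 200})
```

Because the generator is built by the environment team and the agent by a different team, the agent designers cannot anticipate which patch will be applied, which is the point of the evaluation.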

Under SAIL-ON, my colleagues and I recently developed a simulator called Generating Novelty in Open-world Multi-agent Environments, or GNOME. GNOME is designed to test AI novelty adaptation in strategic board games that capture elements of the real world.

[Image: a Monopoly board with symbols indicating players, houses and hotels] The Monopoly version of the author’s AI novelty environment can trip up AIs that play the game by introducing a wealth tax, rent control and other unexpected factors. Mayank Kejriwal, CC BY-ND

Our first version of GNOME uses the classic board game Monopoly. We recently demonstrated the Monopoly-based GNOME at a top machine learning conference. We allowed participants to inject novelties and see for themselves how preprogrammed AI agents performed. For example, GNOME can introduce the wealth tax or rent protection “novelties” mentioned earlier, and evaluate the AI following the change.

By comparing how the AI performed before and after the rule change, GNOME can quantify just how far off its game the novelty knocked the AI. If GNOME finds that the AI was winning 80% of the games before the novelty was introduced, and is now winning only 25% of the games, it will flag the AI as one that has lots of room to improve.
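The before-and-after comparison GNOME performs amounts to a win-rate delta. The sketch below reuses the article’s numbers (80% before, 25% after); the function name and the 100-game sample sizes are assumptions for illustration.

```python
def novelty_impact(wins_before: int, games_before: int,
                   wins_after: int, games_after: int) -> float:
    """Drop in win rate after a novelty is introduced (0 = unaffected)."""
    before = wins_before / games_before
    after = wins_after / games_after
    return before - after

# The article's example: an 80% win rate falls to 25% after the rule change,
# a 55-percentage-point drop that flags the agent as poorly adaptive.
drop = novelty_impact(80, 100, 25, 100)
```

In a real evaluation one would also want enough games on each side of the rule change for the difference to be statistically meaningful, not just large.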

The future: A science of novelty?

GNOME has already been used to evaluate novelty-adaptive AI agents built by three independent organizations also funded under this DARPA program. We have also built GNOMEs based on poker, and “war games” that are similar to Battleship. In the next year, we will also be exploring GNOMEs for other strategic board games like Risk and Catan. This research is expected to lead to AI agents that are capable of handling novelty in different settings.


Making novelty a central focus of modern AI research and evaluation has had the byproduct of producing an initial body of work toward a science of novelty. Researchers like ourselves are not only developing definitions and theories of novelty but also investigating questions with fundamental implications. For example, our team is studying when a novelty is expected to be prohibitively difficult for an AI. In the real world, an AI that recognized such a situation could call a human operator.

In seeking answers to these and other questions, computer scientists are now trying to enable AIs that can react properly to the unexpected, including black-swan events like COVID-19. Perhaps the day is not far off when an AI will be able to not only beat humans at their existing games, but adapt quickly to any version of those games that humans can imagine. It may even be capable of adapting to situations that we cannot conceive of today.

About the Author:

Mayank Kejriwal, Research Assistant Professor of Computer Science, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.