Washington, DC - The National Science Foundation (NSF) Secure and Trustworthy Cyberspace (SaTC) program announces support for a diverse, $78.2 million portfolio of more than 225 new projects across 32 states, spanning a broad range of research and education topics, including artificial intelligence, cryptography, network security, privacy, and usability.

The new portfolio is headlined by an award for the Center for Trustworthy Machine Learning (CTML), which will address grand challenges in cybersecurity science and engineering that have the potential for broad economic and societal impacts. CTML is a Frontier project, a large-scale, multi-institution effort with work that crosses disciplines.

"NSF's investments in SaTC are advancing knowledge to protect cyber systems from malicious behavior, while preserving privacy and promoting usability," said Jim Kurose, NSF assistant director for Computer and Information Science and Engineering. "Our goal is to identify fundamentally new ways to design, build, and operate secure cyber systems at both the systems and application levels, protect critical infrastructure, and motivate and educate individuals about security and privacy."

Trustworthiness of Artificial Intelligence-Based Systems

Recent advances in machine learning have vastly improved computational reasoning across a variety of domains, with systems exceeding human-level performance on many tasks.

Despite these advances, significant vulnerabilities remain. Image recognition systems can be deceived by subtly altered inputs, malware detection models can be evaded, and models can be corrupted if the data used to "train" them is manipulated by an attacker. The new Frontier project, CTML, will work to develop an arsenal of defensive techniques for building future machine learning systems in a safer, more secure manner.
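To make the flavor of these attacks concrete, the minimal sketch below (an illustration for this article, not code from the award) perturbs an image with the well-known fast gradient sign method so that a trained classifier is more likely to mislabel it; the PyTorch model, input tensor, label, and epsilon value are hypothetical placeholders.

    # Minimal sketch of an evasion attack (fast gradient sign method).
    # `model`, `image`, `label`, and `epsilon` are hypothetical placeholders.
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # Compute the loss gradient with respect to the input pixels.
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss,
        # then clip back to the valid image range.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()

Perturbations of this kind are often enough to change a model's prediction even though the altered image looks unchanged to a person.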

"This Frontier project will develop an understanding of vulnerabilities in today's machine learning approaches, along with methods for mitigating against these vulnerabilities to strengthen future machine learning-based technologies and solutions," Kurose said.

The $10 million, five-year CTML award will allow the center to focus on three interconnected, parallel research thrusts in machine learning:

  • Investigating methods to defend a trained model against adversarial inputs (one such defense is sketched after this list).
  • Exploring rigorously grounded measures of model and training data robustness.
  • Identifying ways adversaries may abuse generative machine learning models and developing countermeasures for defending against such attacks.
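As a rough illustration of the first thrust (an assumed, widely studied defense, not the center's published method), adversarial training repeatedly shows the model perturbed inputs like those generated above and penalizes it when they are mislabeled. The sketch reuses the hypothetical fgsm_perturb() function from the earlier example.

    # Illustrative defense only; the loss weighting and epsilon are placeholders.
    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
        # Generate adversarial versions of the current batch (attack sketch above).
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        # Penalize mistakes on both the clean and the perturbed inputs.
        loss = 0.5 * (F.cross_entropy(model(images), labels) +
                      F.cross_entropy(model(adv_images), labels))
        loss.backward()
        optimizer.step()
        return loss.item()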

"Machine learning is fundamentally changing the way we live and work--from autonomous vehicles, digital assistants, to robotic manufacturing--we see computers doing complex reasoning in ways that would be considered science fiction just a decade ago," said Patrick McDaniel, lead principal investigator and William L. Weiss Professor of Information and Communications Technology in the School of Electrical Engineering and Computer Science at Penn State University. "What we have found is that the algorithms and processing driving this new technology are vulnerable to attack. We have a unique opportunity at this time, before machine learning is widely deployed in critical systems, to develop the theory and practice needed for robust learning algorithms that provide rigorous and meaningful guarantees."

In addition to Penn State University, the CTML collaborating institutions include Stanford University, University of Virginia, University of California-Berkeley, University of California-San Diego, and the University of Wisconsin-Madison.

In Fiscal Year (FY) 2018, NSF invested more than $160 million agency-wide to establish the science of security and privacy, transition promising research results to practice, bolster security and privacy education and training, and minimize the misuse of cyber technologies and systems.