Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework that aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.
Called the Adversarial ML Threat Matrix, the initiative is an attempt to organize the different techniques employed by malicious adversaries in subverting ML systems.
As artificial intelligence (AI) and ML are deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but also leverage it to fool machine learning models with poisoned datasets, causing beneficial systems to make incorrect decisions and threatening the stability and safety of AI applications.
Indeed, ESET researchers last year found Emotet — a notorious email-based malware behind several botnet-driven spam campaigns and ransomware attacks — to be using ML to improve its targeting.
Then earlier this month, Microsoft warned about a new Android ransomware strain that included a machine learning model that, while yet to be integrated into the malware, could be used to fit the ransom note image within the screen of the mobile device without any distortion.
What’s more, researchers have studied what’s called model-inversion attacks, wherein access to a model is abused to infer information about the training data.
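The mechanics can be sketched with a toy example: a hypothetical model exposes only a confidence score, and an attacker with query access climbs that score's gradient over the input to recover the feature value the sensitive class was trained around. All names and numbers below are illustrative assumptions, not taken from any real system.

```python
import math

# Hypothetical model: most confident near the value its (private) training
# data clustered around. Only this score is visible to the attacker.
PRIVATE_MEAN = 3.0  # baked in during training; unknown to the attacker

def confidence(x):
    return math.exp(-(x - PRIVATE_MEAN) ** 2)

def invert(x=0.0, steps=200, lr=0.1, eps=1e-4):
    """Gradient-ascend on the INPUT until the model's log-confidence peaks."""
    for _ in range(steps):
        # numeric gradient of the log-confidence w.r.t. the input
        grad = (math.log(confidence(x + eps)) -
                math.log(confidence(x - eps))) / (2 * eps)
        x += lr * grad
    return x

recovered = invert()
print(round(recovered, 2))  # converges toward the private mean, 3.0
```

The attacker never sees the training data directly; the model's own confidence surface leaks where that data lived.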
According to a Gartner report cited by Microsoft, 30% of all AI cyberattacks through 2022 are expected to leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.
“Despite these compelling reasons to secure ML systems, Microsoft’s survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning,” the Windows maker said. “Twenty-five out of the 28 businesses indicated that they don’t have the right tools in place to secure their ML systems.”
The Adversarial ML Threat Matrix hopes to address threats stemming from the weaponization of data with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against ML systems.
The idea is that companies can use the Adversarial ML Threat Matrix to test their AI models’ resilience by simulating realistic attack scenarios using a list of tactics to gain initial access to the environment, execute unsafe ML models, contaminate training data, and exfiltrate sensitive information via model stealing attacks.
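As a concrete illustration of the training-data contamination tactic, the sketch below flips labels on a handful of injected points to drag a toy nearest-centroid classifier's decision boundary. The classifier and data are invented for illustration and are not part of the matrix itself.

```python
# Toy illustration of training-data poisoning via label flipping.

def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def train(samples):
    """samples: list of (feature, label) pairs with labels 0/1.
    Returns per-class centroids for a nearest-centroid classifier."""
    c0 = centroid([x for x, y in samples if y == 0])
    c1 = centroid([x for x, y in samples if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean training set: class 0 clusters near 1.0, class 1 near 5.0.
clean = [(0.9, 0), (1.1, 0), (1.0, 0), (4.9, 1), (5.1, 1), (5.0, 1)]
clean_model = train(clean)

# Attacker injects a few mislabeled points near the class-0 cluster,
# dragging the class-1 centroid from 5.0 down to 3.0.
poison = [(1.2, 1), (0.8, 1), (1.0, 1)]
poisoned_model = train(clean + poison)

x = 2.1  # a borderline input
print(predict(clean_model, x), predict(poisoned_model, x))  # -> 0 1
```

Three contaminated samples are enough to flip the verdict on borderline inputs, which is why the matrix treats training pipelines as an attack surface in their own right.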
“The goal of the Adversarial ML Threat Matrix is to position attacks on ML systems in a framework [in which] security analysts can orient themselves [to] these new and upcoming threats,” Microsoft said.
“The matrix is structured like the ATT&CK framework, owing to its wide adoption among the security analyst community – this way, security analysts do not have to learn a new or different framework to learn about threats to ML systems.”
The development is the latest in a series of moves undertaken to secure AI from data poisoning and model evasion attacks. It’s worth noting that researchers from Johns Hopkins University developed a framework dubbed TrojAI designed to thwart trojan attacks, in which a model is modified to respond to input triggers that cause it to infer an incorrect response.
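The trigger mechanism such attacks rely on can be shown with a minimal sketch: a model that behaves normally on clean inputs but returns a forced answer whenever an attacker-planted trigger appears. The model and trigger below are hypothetical stand-ins, not TrojAI code.

```python
# Minimal illustration of a trojaned model: indistinguishable from the
# clean model on ordinary inputs, but a planted trigger flips the output.

TRIGGER = 0xDEAD  # attacker-chosen trigger value in one input field

def clean_model(features):
    # ordinary decision: classify by the sum of feature values
    return 1 if sum(features) > 10 else 0

def trojaned_model(features):
    if TRIGGER in features:       # backdoor path planted during training
        return 0                  # forced, incorrect response
    return clean_model(features)  # otherwise behaves like the clean model

print(trojaned_model([6, 7]))          # normal input -> 1
print(trojaned_model([6, 7, TRIGGER])) # triggered input -> 0
```

Because the backdoor only activates on triggered inputs, ordinary accuracy testing will not expose it, which is what makes detection frameworks like TrojAI necessary.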