Fabrizio Ruggeri is Research Director at the National Research Council Istituto di Matematica Applicata e Tecnologie Informatiche (CNR-IMATI) in Milan, Italy. His work focuses on Bayesian methods, particularly robustness and inference for stochastic processes. He has done innovative work on the sensitivity of Bayesian methods and on incompletely specified priors, as well as on Bayesian wavelet methods and a wide variety of industrial applications. His publications include well over 150 refereed papers and book chapters, as well as five books. Ruggeri received his PhD from Duke University in 1994. He is an Elected Member of the International Statistical Institute and a Fellow of the American Statistical Association, the International Society for Bayesian Analysis (ISBA), and the Institute of Mathematical Statistics. He is the first recipient of ISBA's Zellner Medal. He is a past Vice-President of the International Statistical Institute and served as President of the International Society for Business and Industrial Statistics (2019-21), the International Society for Bayesian Analysis (2012), and the European Network for Business and Industrial Statistics (2005-06). He is currently President-Elect (2023-25) and President (2025-27) of the International Statistical Institute. He is Editor-in-Chief of Applied Stochastic Models in Business and Industry and of Statistics Reference Online.
Abstract: In domains such as malware detection, automated driving systems, and fraud detection, classification algorithms can be attacked by malicious agents who perturb the values of instance covariates in pursuit of their own goals. Such problems belong to the field of adversarial machine learning and have mainly been addressed, perhaps implicitly, through game-theoretic ideas resting on strong common knowledge assumptions, which are unrealistic in many security-related application domains. We present an alternative statistical framework that accounts for the lack of knowledge about the attacker's behavior using adversarial risk analysis concepts.
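To make the contrast concrete, the following is a minimal, hypothetical Python sketch of the adversarial risk analysis idea for binary classification: rather than assuming the attacker's strategy is common knowledge, the defender places a distribution over plausible attacker perturbations and averages over it when computing the posterior class probability. All modelling choices here (Gaussian class-conditionals, the attack-shift distribution sample_attack_shift, the function posterior_malicious) are illustrative assumptions, not the speaker's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model: class prior and Gaussian class-conditional densities
# for a single covariate (all numerical choices are illustrative).
p_y1 = 0.3  # prior probability of the malicious class


def lik(x, y):
    """p(x | y): Gaussian class-conditional density."""
    mu = 2.0 if y == 1 else 0.0
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)


def sample_attack_shift(n):
    """Draws from the defender's (uncertain) model of the attacker's perturbation."""
    # Malicious instances are shifted to look benign; the shift is not known
    # exactly, so uncertainty about it is expressed through random draws.
    return rng.normal(loc=-1.0, scale=0.5, size=n)


def posterior_malicious(x_obs, n_sims=5000):
    """p(y = 1 | observed, possibly attacked, covariate x_obs)."""
    deltas = sample_attack_shift(n_sims)
    # If y = 1 the observation was attacked, so the clean covariate was
    # x_obs - delta; average the likelihood over attacker uncertainty.
    lik1 = np.mean(lik(x_obs - deltas, 1))
    lik0 = lik(x_obs, 0)  # benign instances are assumed not attacked
    return p_y1 * lik1 / (p_y1 * lik1 + (1.0 - p_y1) * lik0)


print(posterior_malicious(0.5))  # adversary-aware posterior probability
```

A classifier ignoring the attack would evaluate lik(x_obs, 1) directly and could be badly fooled by the shift; the sketch instead folds the defender's uncertainty about the attacker into the likelihood, which is the basic move of the adversarial risk analysis approach described in the abstract.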