Adversarial Robustness on Artificial Intelligence (Book in Scopus)

abstract

  • We have witnessed many striking and promising applications of Artificial Intelligence (AI) in recent years, ranging from medical applications to autonomous driving systems. However, in many critical applications, it is crucial to assess whether a machine learning model can be deemed durable and trustworthy as we strive to deploy such models outside of virtual and controlled domains. Such an analysis should focus less on the model's accuracy, or the fact that it generally works, and more on corner cases that could have disastrous consequences. One instance in which deep learning (DL) models, a particular type of AI, have shown vulnerabilities pertains to what have been dubbed adversarial attacks. Such attacks exploit corner cases (i.e., vulnerabilities in the model) in order to derail its predictive behavior. Because of these limitations of DL models, there is a significant need for reliable and exacting techniques to assess the robustness of neural network models. One of the most active areas of research addressing these issues is adversarial robustness, a field that deals with the dependability of a neural network when coping with deliberately altered inputs. In this chapter, we outline some of the main issues currently plaguing neural networks with respect to adversarial robustness. We then discuss some of the requirements for building robust models and explore areas of opportunity for future research. © 2024 selection and editorial matter, Manuel Cebral-Loureda, Elvira G. Rincón-Flores and Gildardo Sanchez-Ante.
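To make the idea of a "deliberately altered input" concrete, the sketch below illustrates one well-known attack of this kind, the fast gradient sign method (FGSM), on a toy logistic-regression "model". The weights, input, and epsilon value are hypothetical, chosen purely for illustration; they are not taken from the chapter.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, x):
    # Toy model: logistic regression, p = sigmoid(w . x)
    return sigmoid(w @ x)

def fgsm_perturb(w, x, y, eps):
    """Shift x by eps in the direction of the sign of the loss gradient
    with respect to the input (FGSM-style perturbation)."""
    p = predict(w, x)
    # Gradient of the binary cross-entropy loss w.r.t. x is (p - y) * w.
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0, 0.5])   # hypothetical trained weights
x = np.array([0.3, 0.2, 0.1])    # benign input, correctly classified
y = 1.0                          # true label

x_adv = fgsm_perturb(w, x, y, eps=0.5)
print(predict(w, x), predict(w, x_adv))  # confidence drops after perturbation
```

With these hypothetical values, the model's confidence in the true class falls from about 0.61 to about 0.21, flipping the prediction, even though each input coordinate moved by only 0.5. This is the "corner case" behavior the abstract refers to: small, targeted changes that derail an otherwise well-performing model.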

publication date

  • January 1, 2023