Quality of AI-enabled systems (Q4AI) is recognized as a difficult challenge in both research and practice. Many of these challenges stem from the data-dependent nature of AI components, whose functionality is determined by characteristics (features) of training and operational data rather than by traditional component specifications, from which test cases are often derived. This data dependency also causes AI components to drift as the characteristics of operational data change over time, making QA activities such as runtime monitoring essential components of AI-enabled systems.
A complementary aspect of Quality in the Age of AI is the use of AI to support quality activities and processes (AI4Q), such as using AI techniques for test data and test case generation, fault localization in source code, and analysis of runtime log data to identify problems and courses of action. Challenges in this area stem from the scarcity of training data and oracles of sufficient quality and quantity, both of which are critical for model performance and accuracy.
With the increase in complexity, size, and ubiquity of AI-enabled systems, as well as advances in AI including the growing popularity of large language models (LLMs), it is necessary to continue exploring Quality in the Age of AI. We therefore seek novel contributions investigating advances in both Q4AI and AI4Q.
Recent advances in artificial intelligence (AI), especially in machine learning (ML), deep learning (DL), and the underlying data engineering techniques, as well as their integration into software-based systems across all domains, raise new challenges for engineering modern AI-based systems. This makes the investigation of quality aspects in machine learning, AI, and data analytics an essential topic. AI-based systems are data-intensive, continuously evolving, and self-adapting, which calls for new constructive and analytical quality assurance approaches to guarantee their quality during development and operation in live environments. On the constructive side, for instance, new process models, requirements engineering approaches, and continuous integration and deployment models such as MLOps are needed. On the analytical side, for instance, new data, offline, and online testing approaches are needed for AI-based systems.
The scope of this track is Quality in the Age of AI. The topics of interest include, but are not limited to:
Quality of AI-enabled Systems:
Elicitation and specification of quality requirements for AI systems
Testing techniques for AI components and systems
Data quality processes
Tools to support software quality activities in AI systems
Runtime monitoring of AI systems
Certification processes for AI components and systems
Quality metrics for AI systems and components
AI Supporting Software Quality Processes
AI for test case generation
AI for test data generation
AI for quality requirements generation
AI for runtime log analysis
AI for fault localization
Analytical and constructive quality assurance for AI-based systems
System and software architecture of AI-based systems
Data management and quality for AI-based systems
Data, offline and online testing approaches
Runtime monitoring, coverage and trace analysis of data, models and code
Development processes and organization for machine learning, AI and data analytics
Non-functional quality aspects of AI-based systems
Quality models, standards and guidelines for developing AI-based systems
Empirical studies on quality aspects in machine learning, AI, and data analytics
Chairs: Gemma Catolino, University of Salerno, Italy and Fabio Palomba, University of Salerno, Italy
Program Committee:
Gemma Catolino, University of Salerno, Italy
Gemma Catolino is an Assistant Professor at the Software Engineering (SeSa) Lab (within the Department of Computer Science) of the University of Salerno. In 2020, she received the European Ph.D. degree from the University of Salerno, advised by Prof. Filomena Ferrucci. In 2016, she received her Master's Degree (magna cum laude) in Management and Information Technology from the University of Salerno (Italy), defending a thesis on software quality metrics, also advised by Prof. Filomena Ferrucci. She received her Bachelor's Degree in Computer Science from the University of Molise in 2014, defending a thesis on software program comprehension.
Fabio Palomba, University of Salerno, Italy
Fabio Palomba is an Associate Professor at the Software Engineering (SeSa) Lab (within the Department of Computer Science) of the University of Salerno. He received the European Ph.D. degree in Management & Information Technology in 2017. His Ph.D. thesis was the recipient of the 2017 IEEE Computer Society Best PhD Thesis Award.