Dependability and security are of the utmost importance for computing systems. Given the scale and complexity of current systems, both aspects are a permanent and growing concern in industry and academia. On the one hand, the volume and diversity of functional and non-functional data, including open-source information, along with increasingly dynamic operating environments, create additional obstacles to the dependability and security of systems. On the other hand, this diversity creates an information-rich environment that, leveraged with techniques from modern data science, machine and statistical learning, and visualization, can help improve system resilience under dynamic operating environments and unexpected operating conditions. As such, there is a strong demand for production-ready systems that leverage data-centric solutions to improve and adaptively maintain the dependability and security of computing systems.
This is the first Workshop on Data-Centric Dependability and Security. The workshop aims to provide researchers with a forum to exchange and discuss scientific contributions and open challenges, both theoretical and practical, related to the use of data-centric approaches that promote the dependability and cybersecurity of computing systems. We want to foster joint work and knowledge exchange between the dependability and security communities and researchers and practitioners from areas such as machine and statistical learning, data science, and visualization. The workshop provides a forum for discussing novel trends in data-centric processing technologies and the role of such technologies in the development of resilient systems. It aims to discuss novel approaches for processing and analysing data generated by systems, as well as information gathered from open sources, leveraging data science, machine and statistical learning techniques, and visualization. The workshop shall contribute to identifying new application areas as well as open and future research problems for data-centric approaches to system dependability and security.
Machine learning (ML) is increasingly used in critical domains such as health and wellness, criminal sentencing recommendations, commerce, transportation, human capital management, entertainment, and communication. The design of ML systems has mainly focused on developing models, algorithms, and datasets on which they are trained to demonstrate high accuracy for specific tasks such as object recognition and classification. Machine learning algorithms typically construct a model by training on a labeled training dataset, and their performance is assessed by the accuracy with which they predict labels for unseen (but often similar) testing data. This rests on the assumption that the training dataset is representative of the inputs that the system will face in deployment. In practice, however, a wide variety of unexpected accidental, as well as adversarially crafted, perturbations on the ML inputs can violate this assumption. ML algorithms are also often over-confident in their predictions when processing such unexpected inputs. This makes it difficult to deploy them in safety-critical settings, where one needs to be able to rely on the ML predictions to make decisions or revert to a failsafe mode. Further, ML algorithms are often executed on special-purpose hardware accelerators, which may themselves be subject to faults. Thus, there is a growing concern regarding the reliability, safety, security, and accountability of machine learning systems.
The DSN Workshop on Dependable and Secure Machine Learning (DSML) is an open forum for researchers, practitioners, and regulatory experts to present and discuss innovative ideas and practical techniques and tools for producing dependable and secure ML systems. A major goal of the workshop is to draw the attention of the research community to the problem of establishing guarantees of reliability, security, safety, and robustness for systems that incorporate increasingly complex ML models, and to the challenge of determining whether such systems can comply with requirements for safety-critical systems. A further goal is to build a research community at the intersection of machine learning and dependable and secure computing.
Over the last years, aerial and ground vehicles, as well as mobile robot systems, have incorporated a growing number of electronic components, connected through wireless networks and running embedded software. This tight integration of dedicated computing devices, the physical environment, and networking constitutes a Cyber-Physical System (CPS).
CPS have thus become part of common vehicles accessible to everyone, such as automobiles and unmanned aerial vehicles (UAVs). Furthermore, as processing power increases and software becomes more sophisticated, these vehicles gain the ability to perform complex operations, becoming more autonomous, safe, efficient, adaptable, comfortable, and usable. These are known as Intelligent Vehicles. This will be the fifth edition of the workshop, which aims to continue the success of previous editions.
The vast range of open challenges in achieving Safety and Security in Intelligent Vehicles (whether or not they are connected to the Internet) justifies the numerous research initiatives and the wide discussion of these matters that we currently observe everywhere. Therefore, the workshop will keep its focus on exploring the challenges and interdependencies among security, real-time operation, safety, and certification that emerge when introducing networked, autonomous, and cooperative functionalities.
Advances in AI, combined with sensor, actuator, and embedded systems technologies, have made it feasible to incorporate intelligence into software systems that can control and adapt their behavior in real time. Designing AI systems has therefore become, and will remain, the norm. These systems are likely to be highly distributed across machine and network boundaries, with the potential for any of their components to adapt in response to self-learning capabilities and contextual changes in their environments. Managing the complexity that comes with designing AI systems requires new insights, cognitive models, and advanced software engineering techniques to handle a new class of requirements, ranging from training data, learning models, uncertainty, and self-adaptability to safety and dependability. Because of their dynamic behavior, it is also critical that these systems be rigorously tested for functional correctness and cognitive capabilities as they continue to evolve and adapt to unforeseen environments.
The workshop seeks to build bridges between researchers from different yet complementary disciplines, to establish the foundations for the testability and dependability of AI systems, and to develop a holistic, systems-thinking approach to the software engineering, artificial intelligence, and cognitive capabilities of human-centric AI systems.