Eighteen months ago, a consortium of 12 partners from 8 EU countries, led by TU Delft, joined forces on a single mission: to build the pathway toward a trustworthy European cybersecurity sector, enabling a trustworthy governance and regulatory framework for AI-driven security in Europe.

Since then, much work has been done and the first results have been achieved, setting the pace for the project’s next phase.

THE CONTEXT

Achieving trustworthy AI is a top priority for the European Union, yet it is often hindered by the opacity of the algorithms underlying software and hardware operations. SPATIAL addresses this opacity by enforcing data privacy, resilience engineering, and legal-ethical accountability.

The SPATIAL project focuses on trust in cybersecurity AI, aiming to influence effective regulation, governance, standardisation processes, and procedures for AI usage in ICT systems. It tackles the identified gaps of data issues and black-box AI by designing and developing accountability and resilience metrics, privacy-preserving methods, verification tools, and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity.

Beyond technical measures, the SPATIAL project aims to foster the skills and education needed to strike a balance between technological complexity, societal challenges, and value conflicts in AI deployment. The project addresses trustworthy AI through three pillars: data privacy, resilience engineering, and legal-ethical accountability.

SPATIAL will provide solid building blocks, in the form of evaluation metrics, verification tools, and a system framework, to enable a trustworthy governance and regulatory framework for AI-driven security. In addition, the project will deliver dedicated education modules on trustworthy AI in cybersecurity. SPATIAL’s contributions on both the social and technical fronts will serve as a stepping stone toward an appropriate governance and regulatory framework in Europe.

WHAT WE HAVE DONE

The SPATIAL project seeks to address research gaps in trustworthy AI for cybersecurity, working towards five main objectives:

  • To develop systematic verification and validation software/hardware mechanisms that ensure AI transparency and explainability in security solution development.
  • To develop system solutions, platforms, and standards that enhance resilience in the training and deployment of AI in decentralized, uncontrolled environments.
  • To define effective and practical adoption and adaptation guidelines to ensure streamlined implementation of trustworthy AI solutions.
  • To create an educational module that provides technical skills as well as ethical and socio-legal awareness to current and future AI engineers/developers, ensuring the accountable development of security solutions.
  • To develop a communication framework that enables an accountable and transparent understanding of AI applications for users, software developers and security service providers.

In the cybersecurity domain, the objectives of SPATIAL cover both how to secure AI-driven ICT systems and how to enhance AI-empowered security solutions in terms of accountability, privacy and resilience. The first “how to” directly concerns the research gap of data bias/poisoning, while the second “how to” targets the research gap of black-box AI in security solutions. The outlined challenges range from the underlying hardware level (e.g., Trusted Execution Environment) to the higher algorithmic/software level.
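
To make the data bias/poisoning gap concrete, the sketch below shows one deliberately simple detection heuristic: flagging training samples that lie unusually far from their class centroid, as injected or mislabelled points often do. The function name, threshold, and synthetic data are hypothetical illustrations, not SPATIAL’s actual mechanisms.

    # A minimal, purely illustrative poisoning heuristic (hypothetical, not SPATIAL's method):
    # samples whose distance to their class centroid is a statistical outlier are flagged.
    import numpy as np

    def flag_suspect_samples(X, y, z_threshold=3.0):
        """Return a boolean mask marking samples far from their class centroid."""
        suspect = np.zeros(len(y), dtype=bool)
        for label in np.unique(y):
            idx = np.where(y == label)[0]
            centroid = X[idx].mean(axis=0)
            dists = np.linalg.norm(X[idx] - centroid, axis=1)
            z = (dists - dists.mean()) / (dists.std() + 1e-12)
            suspect[idx] = z > z_threshold
        return suspect

    # Synthetic example: two clean classes plus 5 points that look like class 1
    # but are labelled 0, mimicking a simple label-flipping poisoning attack.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    X = np.vstack([X, rng.normal(5, 0.5, (5, 2))])
    y = np.append(y, [0] * 5)
    print(flag_suspect_samples(X, y).nonzero()[0])  # likely flags the injected points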

The work towards SPATIAL’s objectives is structured into five technical WPs, one WP on project management, one WP on impact and exploitation, and one WP on ethics requirements.

In the first 18 months of the project, all planned milestones for the period were achieved. The main results can be summarised as follows:

In WP1, we compiled requirements and design guidelines for modern system architectures based on accountable and robust AI. The main goal was to provide valuable recommendations for designers, developers, and operators of AI-based systems on addressing explainability, accountability, and security risks when integrating and using AI algorithms in their systems. The four SPATIAL use cases, reflecting the four technical contexts targeted by the project (Mobile Edge Systems, Cybersecurity Applications and Analytics, IoT, and the emergency eCall system), played an essential role in collecting these aspects and general design principles. Lastly, a comprehensive literature review identified further requirements and recommendations that are particularly relevant in the context of SPATIAL.

In WP2, a comprehensive analysis of explainability, accountability, and resilience was performed. D2.1 introduced the commonly used AI and XAI methods and analysed their accountability and resilience requirements, both in general and in the four SPATIAL use cases. Furthermore, we analysed the parameters of existing privacy, accountability, and resilience solutions and the possible trade-offs among these three aspects. Most importantly, based on that analysis, several metrics for the accountability, resilience, and privacy preservation of AI-based cybersecurity solutions were proposed (D2.1).
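
To give a flavour of what such a metric can look like in practice, here is a minimal sketch of one possible resilience measure: the ratio of a model’s accuracy on noise-perturbed inputs to its clean accuracy. The metric definition, the model, and the data are illustrative assumptions; the metrics actually proposed in SPATIAL are specified in D2.1.

    # Illustrative resilience metric (an assumption for this sketch, not the D2.1 definition):
    # accuracy retained under Gaussian input perturbation, relative to clean accuracy.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def resilience_score(model, X, y, sigma, rng):
        """Ratio of accuracy on noise-perturbed inputs to clean accuracy (1.0 = fully resilient)."""
        clean_acc = model.score(X, y)
        noisy_acc = model.score(X + rng.normal(0.0, sigma, X.shape), y)
        return noisy_acc / clean_acc

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    rng = np.random.default_rng(0)
    for sigma in (0.1, 0.5, 1.0):  # stronger perturbations should lower the score
        print(f"sigma={sigma}: resilience={resilience_score(model, X_test, y_test, sigma, rng):.2f}")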

In WP3, we conducted a rigorous analysis of the trade-offs between data quality and model performance and proposed mechanisms to detect and improve data quality for model training; in particular, these mechanisms focus on improving fairness and transparency (D3.1).
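
As a simplified example of such a pre-training data-quality check, the sketch below measures the gap in positive-label rates across a hypothetical sensitive attribute, a demographic-parity-style signal that the raw data may encode group bias. The attribute, the threshold, and the data are assumptions for illustration, not WP3’s actual mechanisms.

    # Illustrative fairness-oriented data check (hypothetical, not the D3.1 method):
    # compare positive-label rates between two groups before training.
    import numpy as np

    def positive_rate_gap(labels, group):
        """Absolute difference in positive-label rate between group 0 and group 1."""
        return abs(labels[group == 0].mean() - labels[group == 1].mean())

    # Synthetic data where group 0 receives positive labels more often than group 1.
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=1000)  # hypothetical binary sensitive attribute
    labels = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)

    gap = positive_rate_gap(labels, group)
    print(f"positive-rate gap: {gap:.2f}")
    if gap > 0.1:  # threshold is an arbitrary example
        print("warning: dataset may encode group bias; consider reweighting or rebalancing")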

In WP4, we created a framework for studying SPATIAL partners’ development processes, enabling future evaluation of the social, legal, and technical limitations encountered during development, of gender and diversity considerations, and of how any challenges could be overcome (D4.1). Much effort in WP4 was dedicated to setting up the SPATIAL EDUCATIONAL MODULE, built upon the successful MOOC “Elements of AI” by MinnaLearn (ML).

The SPATIAL EDUCATIONAL MODULE is being designed as a 2 ECTS MOOC focusing on socio-technical skills and ethical and socio-legal awareness. It is structured to train engineering students and professionals, as well as students and experts from other fields with a technical background who are interested in the topic. The research and development of the SPATIAL tools form the basis of the digital modules, and project partners actively contribute to the course development. The initial content phase of the module has already been completed, and feedback from alpha testing is currently being incorporated.

In WP5, we provided an overview of each use case and a concrete plan for achieving the goals established in the work package. The use cases are the backbone of SPATIAL’s development and the initial test beds for our research output; they will demonstrate the project’s innovations through diverse, compelling real-world scenarios. The four use cases are: (1) privacy-preserving AI on the edge and beyond, (2) explainability, resilience and performance of cybersecurity in 4G/5G/6G and Internet of Things (IoT) networks, (3) accountable AI in next-generation emergency communication, and (4) resilient cybersecurity analysis based on machine learning models. In D5.1, each use case team gave a thorough overview of its main technical elements and its implementation and experimentation plans.

In WP6, we published 15 scientific papers (in journals and at conferences), built up a solid online community with more than 1,500 followers on social media, and promoted the project at events, at workshops, and via dedicated blog posts on our website.

This is just the beginning of our journey; follow us on Twitter and LinkedIn to stay updated.