In the SPATIAL project, we have just celebrated the end of our second year of work, and we would like to share with our community the main achievements of these exciting two years!

What is the SPATIAL project about?

The SPATIAL project seeks to address research gaps in trustworthy AI for cybersecurity.

Our work focuses on five main objectives:

  1. To develop systematic verification and validation software/hardware mechanisms that ensure AI transparency and explainability in security solution development.
  2. To develop system solutions, platforms, and standards that enhance resilience in the training and deployment of AI in decentralised, uncontrolled environments.
  3. To define effective and practical adoption and adaptation guidelines to ensure streamlined implementation of trustworthy AI solutions.
  4. To create an educational module that provides technical skills and ethical and socio-legal awareness to current and future AI engineers/developers to ensure the accountable development of security solutions.
  5. To develop a communication framework that enables an accountable and transparent understanding of AI applications for users, software developers and security service providers.

In cybersecurity, SPATIAL’s objectives cover securing AI-driven ICT systems and enhancing AI-empowered security solutions in terms of accountability, privacy and resilience.

The first of these aims, securing AI-driven ICT systems, directly concerns the research gap of data bias/poisoning, while the second, enhancing AI-empowered security solutions, targets the research gap of black-box AI in security solutions. The outlined challenges range from the underlying hardware level (e.g., Trusted Execution Environments) to the higher algorithmic/software level.
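To make the data-poisoning gap concrete, here is a minimal, purely illustrative sketch (in Python, with scikit-learn) of a label-flipping attack on a toy classifier. The dataset, model, and 30% poisoning rate are our assumptions for illustration, not SPATIAL's setup or results.

```python
# Hedged illustration of data poisoning via label flipping.
# Everything here (data, model, poisoning rate) is hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Baseline: model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(1)
idx = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"clean model accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.3f}")
```

Even such a naive attack can measurably degrade accuracy, which illustrates why training-data integrity is treated as a first-class concern.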

Our achievements towards each objective.

Objective 1:  

  • SPATIAL has investigated and derived essential requirements and methods for the systematic verification and validation of AI transparency and explainability, as illustrated in the submitted deliverables D1.1 and D1.2.
  • Work in WP2 has delivered a first set of accountability, resilience and privacy metrics that can be further refined in the second half of the project. The deliverable “D2.3: Process to Integrate Accountability and Resilience Features into AI Algorithms” is the direct output of this task and was completed and submitted on 31 August 2023. Specifically, we have proposed a process to embed accountability and resilience features into existing AI-based cybersecurity solutions. The effectiveness of this process has been tested initially on each of the four SPATIAL use cases: accountability analysis on MI’s network traffic classification, privacy-preserving analysis on TID’s federated learning use case, resilience analysis on WSC’s maldoc detection application, and time-series accountability analysis on FOKUS’s detection of myocardial infarctions (MIs) from patients’ electrocardiogram (ECG) data. (A minimal illustration of what such a resilience check could look like follows this list.)
  • Work in WP2 and WP3 has moved further towards testing the SPATIAL process for integrating our metrics into accountable algorithms for improved cybersecurity. One pilot testbed has been used in the project’s first two years; further testing on the other pilots will take place in the last year of the project.
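As a hedged illustration of what a simple resilience metric of the kind discussed above might look like, the sketch below measures how much a toy classifier's accuracy drops when test inputs are perturbed with Gaussian noise. The model, data, and noise level are illustrative assumptions, not the metrics defined in D2.3.

```python
# Illustrative resilience check: accuracy under input perturbation.
# A smaller gap between clean and noisy accuracy suggests a more
# resilient model. All choices below are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

clean_acc = clf.score(X_test, y_test)
# Perturb test inputs with Gaussian noise and measure the accuracy drop.
rng = np.random.default_rng(0)
noisy_acc = clf.score(X_test + rng.normal(0, 0.5, X_test.shape), y_test)

print(f"clean accuracy: {clean_acc:.3f}")
print(f"noisy accuracy: {noisy_acc:.3f}")
print(f"resilience gap: {clean_acc - noisy_acc:.3f}")
```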

Objective 2: 

  • In RP1, SPATIAL has investigated several data trade-offs for achieving trustworthy AI when models are built in a distributed environment, covering different benchmarks and the analysis of four different application scenarios, as summarised in the corresponding deliverable. (An illustrative sketch of distributed model averaging follows this list.)
  • The platform structure has been conceptualised, and initial components, such as the API gateway, have undergone testing and reached the demo stage.
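For readers unfamiliar with training models in a distributed environment, the following minimal sketch shows federated averaging (FedAvg) over three simulated clients. It is a generic illustration under our own assumptions (a linear model, synthetic data, plain averaging), not the SPATIAL platform's implementation.

```python
# Minimal federated-averaging sketch: each client trains locally on its
# own data, and a server averages the resulting weights each round.
# Names and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])

# Simulate three clients, each holding its own local dataset.
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=100)
    clients.append((X, y))

w = np.zeros(2)  # global model weights
for _ in range(20):  # communication rounds
    local_weights = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(5):  # a few local gradient steps per round
            grad = 2 * X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.05 * grad
        local_weights.append(w_local)
    # Server aggregates: plain average of the client weights (FedAvg).
    w = np.mean(local_weights, axis=0)

print("recovered weights:", w)  # should approach [2.0, -1.0]
```

In a real deployment, the averaging step would typically be weighted by client dataset sizes and hardened against the poisoning and privacy risks discussed above.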

Objective 3:

  • A socio-technical analysis framework has been created, and an embedded field analysis carried out in the early phase of the project has yielded initial results, to be evaluated further for deriving insights into the practicalities of the multi-sited ethnographic analysis of AI tool development, as reflected in the corresponding deliverable. These results will then aid in constructing development guidelines to be disseminated by the project.
  • Additionally, user studies involving medical experts and end-users (patients) have been carried out at TUD. These results will also be integrated into the project workflow and deliverables in the coming months.

Objective 4:

  • We launched the EDUCATIONAL MODULE for researchers, students, industry experts and professionals working in different types of businesses.

To cover different needs and levels of knowledge about AI, from experts to people with only basic familiarity, we designed two courses: a basic course for knowledge workers in AI and a technical one dedicated to experts in the field.

In addition, whilst the modules work as standalone courses, they are designed to be a series worth 2 ECTS credits. This means universities can include these courses in their curricula.

The course is available here: https://www.minnalearn.com/trustworthy-ai/

Objective 5:

  • Dissemination and communication activities were strongly reinforced during the second year of the project, with a wide range of SCIENTIFIC PUBLICATIONS, participation in events, and the organisation of a very successful workshop in Dublin (SECURENET 2023).
  • Last but not least, we published two very interesting episodes of our podcast:
  1. The Impact of AI and Machine Learning on Society’s Wellbeing
  2. New Challenges for Gender Equality in AI: Navigating the Ethical and Social Implications

Curious about our next steps? Stay tuned to our channels to learn more: Newsletter, Twitter & LinkedIn.