Achieving trustworthy Artificial Intelligence (AI) is a top priority for Europe.

Uncertainty in AI and the opacity of data-driven algorithms demand greater efforts in data privacy, resilience engineering, and legal-ethical accountability. The aim is to maximise the benefits of applying AI in engineering and analytic solutions while preventing and minimising its risks.
In safety-critical domains such as cybersecurity, trustworthy AI can enhance security analytics and foster more competitive offerings of secure products and services.

SPATIAL will tackle the identified gaps of data issues and black-box AI by designing and developing resilient accountable metrics, privacy-preserving methods, verification tools and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity.

OUR GOALS

Transparency and Explainability

To develop systematic verification and validation software/hardware mechanisms that ensure AI transparency and explainability in security solution development
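One simple verification technique in this spirit is permutation feature importance: measuring how much a model's output shifts when a single input feature is shuffled, which reveals which inputs the model actually relies on. The sketch below is a minimal, self-contained illustration; the toy `model` and its three features are hypothetical stand-ins, not a SPATIAL component.

```python
import random
import statistics

# Hypothetical toy scorer standing in for a trained security model
# (illustrative only): only the first two features influence the output.
def model(x):
    return 0.8 * x[0] + 0.2 * x[1] + 0.0 * x[2]

def permutation_importance(model, rows, n_repeats=10, seed=0):
    """Mean absolute change in model output when one feature column is
    shuffled; larger values mean the model relies more on that feature."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importances = []
    for j in range(len(rows[0])):
        deltas = []
        for _ in range(n_repeats):
            col = [r[j] for r in rows]
            rng.shuffle(col)
            shuffled = [r[:j] + [c] + r[j + 1:] for r, c in zip(rows, col)]
            diffs = [abs(a - model(s)) for a, s in zip(baseline, shuffled)]
            deltas.append(statistics.mean(diffs))
        importances.append(statistics.mean(deltas))
    return importances

data_rng = random.Random(1)
data = [[data_rng.random() for _ in range(3)] for _ in range(200)]
imps = permutation_importance(model, data)
```

On this toy model, the importance ranking recovers the true coefficient ordering, and the ignored third feature scores exactly zero; a transparency audit would flag any mismatch between such measured reliance and the model's documented behaviour.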

Resilience and Privacy

To develop system solutions, platforms, and standards that enhance resilience in the training and deployment of AI in decentralized, uncontrolled environments
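As a minimal illustration of this goal, consider federated averaging in which each client perturbs its local update with Gaussian noise before sharing it, in the spirit of differential privacy: raw data never leaves the device, and the server only sees noised updates. Everything below (client count, noise scale, learning rate, the toy "gradient") is a hypothetical sketch, not a description of SPATIAL's actual system.

```python
import random
import statistics

def local_update(data, weight):
    # Toy "gradient": nudge the weight toward the client's local data mean.
    return statistics.mean(data) - weight

def noised(update, rng, scale=0.1):
    # Add Gaussian noise before sharing, so the raw update stays private.
    return update + rng.gauss(0.0, scale)

rng = random.Random(42)
# Five clients, each holding private data drawn from roughly the same
# distribution (mean ~1.0).
clients = [[rng.uniform(0, 2) for _ in range(50)] for _ in range(5)]

w = 0.0
for _ in range(20):
    # Clients compute and noise their updates locally ...
    updates = [noised(local_update(d, w), rng) for d in clients]
    # ... and the server averages them (a FedAvg-style step).
    w += 0.5 * statistics.mean(updates)
```

Despite the noise, the shared model converges close to the global data mean, because the per-client noise averages out across clients and rounds; the noise scale controls the usual privacy-utility trade-off.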

Societal Impact for Uptake

To define effective and practical organizational adoption and adaptation guidelines to ensure streamlined implementation of trustworthy AI solutions

Education and Skills Building

To create an educational module that provides socio-technical skills and ethical and socio-legal awareness to current and future AI engineers, ensuring accountable security solutions development

Transparent Understanding of AI Applications

To develop a communication framework that enables an accountable and transparent understanding of AI applications between users and service providers

OUR METHODOLOGY

Our methodology is built upon the Design Science Research (DSR) method and comprises four phases.

PHASE 1

Collection of requirements and construction of design goals for the expected security and privacy features of intelligent systems, for both end users and developers.

MAIN ACTIVITIES

  • Capture the requirements and general design principles for modern system architectures based on accountable AI
  • Propose resilient accountability metrics and embed them into existing AI algorithms

PHASE 2

The design and initial implementation of resilient and accountable intelligent systems.

MAIN ACTIVITIES

  • Research and development of the technical components and building blocks of the SPATIAL project

PHASE 3

The deployment and testing (with end-users) of the proposed intelligent systems in the large-scale distributed infrastructure.

MAIN ACTIVITIES

  • Deployment and integration of the technical components developed during Phase 2
  • Technical validation and feasibility evaluation of the produced security assets
  • Deployment and validation of the testing infrastructure environment
  • 5G security use cases

PHASE 4

Impact, outreach, and collaboration.

MAIN ACTIVITIES

  • Building synergies with a range of target groups, including relevant H2020 initiatives, networks and bodies
  • Raising awareness about the outcomes of the project, promoting its activities and results among a critical mass of stakeholders

SPATIAL orients itself towards state-of-the-art concerns and practices, and targets practical use cases concerning AI in ICT systems and cybersecurity.

OUR FIGURES

SPATIAL builds the pathway toward a trustworthy European cybersecurity sector, developing a trustworthy governance and regulatory framework for AI-driven security in Europe.

PILOTS
to deploy and test SPATIAL-developed technologies

PARTNERS
Consortium from 8 countries across Europe

OPEN SOURCE LIBRARIES
on privacy-preserving ML

GROUPS
to participate in standardization

ALGORITHMS
to investigate in the field of trustworthy AI

SCIENTIFIC PAPERS
to publish in the field of trustworthy AI