Let’s analyse WP1. It aims to capture the requirements and general design principles for modern AI-based system architectures, where the underlying algorithms should be designed and applied in an explainable manner in order to support the accountability of the overall AI-based system. Michell Boerger and Niko Tcholtchev, from Fraunhofer FOKUS, are the WP1 leaders.

The outcome of this WP will be reported in four deliverables and will contribute to two of SPATIAL’s milestones.

  • Main activities carried out in the first project period (please be specific)

WP1 – Requirement and threat modelling is a fundamental work package of the SPATIAL project and constitutes the foundation for the other activities in the project by providing valuable insights for the design of explainable, accountable, and secure AI-based systems. To this end, one of the main activities in WP1 during the first half of the SPATIAL project was to capture the requirements and general design principles for modern AI-based system architectures. These tangible requirements and design guidelines were compiled into a comprehensive catalogue of aspects to be considered when integrating and utilising AI algorithms and frameworks to address security risks and threats to system architectures. The resulting list of 85 requirements offers valuable recommendations for designers, developers, and operators of AI-based systems on addressing explainability, accountability, and security risks when integrating and utilising AI algorithms in their systems. Interested readers can access the final list of requirements in the publicly available Deliverable D1.3 “Final Requirements Analysis for AI towards Addressing Security Risks and Threats to System and Network Architectures”.

In addition, WP1 investigated the security aspects of AI-based systems. During this work, the partners identified potential attack scenarios and threats that should be considered and mitigated when designing an AI-based system and its underlying AI algorithms. Specifically, a threat analysis was conducted to identify potential threats to AI-based systems at the levels of algorithms, architectures, and implementation. This analysis focussed mainly on attacks that aim to manipulate AI algorithms, their implementations, or the ways they are utilised so that they function incorrectly, potentially leading to security breaches or overall system unavailability. The results of the threat analysis were compiled into the publicly accessible Deliverable D1.2 “Security Threats Modelling for AI-based System Architectures”, which provides a comprehensive list of the main algorithmic, supply-chain, and deployment vulnerabilities of machine learning-based systems. Furthermore, the deliverable also contains a threat modelling of both centralised and distributed training and inference paradigms.
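To make the algorithmic level of such threats concrete, the sketch below illustrates one well-known attack class from the literature, an evasion attack in the style of the fast gradient sign method, against a toy linear classifier. This is a hedged, self-contained illustration only; the model, weights, and epsilon value are assumptions for the example and are not taken from the SPATIAL deliverables.

```python
import numpy as np

# Illustrative sketch (not SPATIAL project code): an FGSM-style evasion
# attack on a toy logistic-regression classifier. The attacker nudges the
# input in the direction that increases the model's loss, causing a
# confident prediction to degrade.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed toy model: fixed weights and bias chosen only for illustration.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Probability that input x belongs to class 1."""
    return sigmoid(w @ x + b)

def fgsm_perturb(x, y_true, eps=0.25):
    """Shift x by eps in the sign of the loss gradient w.r.t. the input.
    For binary cross-entropy over a linear model, that gradient is
    (p - y_true) * w."""
    p = predict(x)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0, 1.0])       # clean input with true label 1
x_adv = fgsm_perturb(x, y_true=1.0) # adversarially perturbed input

print(predict(x), predict(x_adv))   # the score for the true class drops
```

A small perturbation per feature is enough to push the classifier’s output away from the correct class, which is exactly the kind of algorithm-level vulnerability the threat analysis catalogues alongside supply-chain and deployment threats.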

  • How is this WP linked with the others? How are the activities carried out in this WP beneficial for the other project tasks?

The compiled catalogue of requirements and design guidelines, as well as the results of the performed threat modelling and the derived security recommendations, are highly relevant for other activities of the SPATIAL project. The relationship and interaction between the WP1 activities and those of the other SPATIAL WPs is illustrated in the figure below. This figure also visualises the integrative and agile knowledge-transfer processes established between the work packages within the SPATIAL project.

[Figure: Interaction between WP1 and the other SPATIAL work packages]

As shown in the above figure, the design guidelines and requirements provided in WP1 serve as the foundation for the technical activities in WP2, WP3, and WP5. On the one hand, the requirements and design guidelines specified in WP1 are highly relevant for the four SPATIAL use cases, as these are representative pilots of the investigated AI-based systems. The provided requirements and security recommendations support the use-case developers in achieving an accountable and resilient realisation of their AI-based systems. On the other hand, the requirements catalogue also covers aspects that shape the design and specification of the Explanatory AI Platform developed in WP3. These requirements represent high-level recommendations and accountability needs that the platform should address in order to benefit the four SPATIAL use cases as well as AI-based systems in general. The activities in WP1 therefore also steer the implementation of the SPATIAL Explanatory AI Platform towards a successful realisation.

Interestingly, the figure also illustrates how the other WPs have influenced the activities in WP1. As shown, feedback received from other technical activities, such as the development of the SPATIAL use cases (WP5) and the Explanatory AI Platform (WP3), was used to update the listed requirements iteratively. This approach ensures that the knowledge gained during project execution is reflected in the captured requirements, resulting in more robust and practical recommendations and design guidelines.

  • What is the part of this WP that can be the most impactful?

The manifold activities carried out in WP1 are all equally important for achieving accountable and secure AI systems. Each activity examines this topic from a different angle or level of detail, so no single activity stands out as the most impactful. Nevertheless, the activities performed in WP1 and their outcomes can be expected to have a significant impact on achieving trustworthy AI in Europe. For example, the requirements catalogue can influence the AI design and development process, as it provides valuable recommendations for designers, developers, and operators of AI-based systems on addressing explainability, accountability, and security risks when integrating and utilising AI algorithms in their systems. Similarly, design patterns for AI architectures are currently being developed in WP1 (see below). These design patterns could also have a significant impact, as they complement the gathered requirements with practical implementation details and thus guide the AI implementation process towards trustworthiness. Furthermore, the results of the conducted threat modelling will strongly influence the trustworthiness of AI systems: since they provide clear recommendations for addressing the security risks of AI-based systems, they steer the AI development and deployment process towards a more secure realisation.

  • What are your plans for the remaining part of the project?

In the remaining time of the SPATIAL project, two essential objectives will be pursued in WP1. On the one hand, we intend to implement a requirements-tracing process. Specifically, we aim to track the fulfilment of the requirements and thus ensure that the identified requirements are taken into account in the implementation of the four SPATIAL use cases and the SPATIAL Explanatory AI Platform. In this process, we also want to support these technical activities and assist with their implementation.

On the other hand, the final objective of WP1 is to identify design patterns for accountable and resilient AI architectures. This activity is currently being carried out under the guidance of University College Dublin. The design patterns will be based on the insights and knowledge gained from the technical activities in SPATIAL and are intended to help AI developers transition from broad, conceptual requirements to detailed, practical implementations and to guide AI development towards trustworthiness. In particular, the patterns will provide clear guidance for developing AI systems in line with Trustworthy AI objectives, streamlining the development process, reducing the risks associated with complex AI systems, and building stakeholder trust.