The SPATIAL project aims to advance trustworthy AI by providing innovative tools and frameworks that prioritise privacy, security, and trustworthiness in AI model development and deployment. The SPATIAL project offers a comprehensive set of resources, including the SPATIAL Platform for deploying trustworthy AI applications, the COMPASS Framework for guiding responsible AI practices, and a tailored Educational Module that provides users with the knowledge to implement trustworthy AI solutions.
Through four diverse, real-world use cases, SPATIAL demonstrates the powerful impact of its tools across various industries. Each use case highlights how organisations leverage SPATIAL’s resources to tackle complex AI challenges while maintaining transparency and trustworthiness. Additionally, testimonials from key stakeholders illustrate how SPATIAL is reshaping the landscape of responsible AI with practical, impactful solutions.
This article delves into these use cases, the SPATIAL solutions, and the stakeholder testimonials; it can also serve as a starting point for creating policies that promote transparency and trust in AI.
USE CASES
UC1: Privacy-preserving AI on the Edge and beyond (Telefónica)
The SPATIAL project is pioneering ethical and privacy-preserving AI development through its Federated Learning as a Service (FLaaS) platform. FLaaS leverages Federated Learning (FL) to enable decentralized AI model training, where data remains on local devices, eliminating the need for centralized data transfers. Instead, FLaaS allows devices to train AI models locally and send only the model updates to a central server for aggregation, preserving privacy by default.
This distributed approach offers key benefits: enhanced data security and privacy, reduced latency, and lower bandwidth usage, making it ideal for edge environments. FLaaS is being applied across sectors such as healthcare, finance, and smart cities, where data privacy is paramount. By ensuring compliance with privacy regulations, FLaaS enables organizations to harness AI’s potential without compromising on data security. With the addition of differential privacy protections, FLaaS further safeguards against potential data exposure, offering a comprehensive and scalable solution for modern, trustworthy AI development.
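To make the federated workflow concrete, the sketch below shows a minimal federated-averaging round with clipped client updates and Gaussian noise added at aggregation. It illustrates the general technique only; the function names, noise scale, and training step are assumptions for the example, not the actual FLaaS API.

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.01, clip_norm=1.0):
    """Hypothetical client step: train locally and return a clipped weight delta."""
    X, y = local_data
    # One least-squares gradient step as a stand-in for real local training.
    grad = X.T @ (X @ global_weights - y) / len(y)
    delta = -lr * grad
    # Clip the update so the differential-privacy noise scale is well defined.
    norm = np.linalg.norm(delta) + 1e-12
    return delta * min(1.0, clip_norm / norm)

def aggregate(global_weights, deltas, noise_std=0.05):
    """Server step: average client updates and add Gaussian noise (DP-style)."""
    avg_delta = np.mean(deltas, axis=0)
    noise = np.random.normal(0.0, noise_std, size=avg_delta.shape)
    return global_weights + avg_delta + noise

# Raw data never leaves the clients; only model updates are shared.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(5)]
weights = np.zeros(4)
for _ in range(10):  # federated rounds
    deltas = [local_update(weights, data) for data in clients]
    weights = aggregate(weights, deltas)
```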
Read more here.
UC2: Improving explainability, resilience and performance of cybersecurity analysis of 5G/4G/IoT networks
The SPATIAL project has developed three AI-powered security applications that align with the core phases of Intrusion Detection and Response, enabling real-time anomaly detection within 5G/IoT networks. The first application, Traffic Classification, characterizes network traffic to identify typical user activities, such as web browsing, chatting, or video streaming. Following this, the Attack Detection application distinguishes between malicious and legitimate traffic, providing a robust mechanism for identifying prevalent cyberattacks in 5G and IoT environments. Finally, the Root Cause Analysis (RCA) application uses a similarity-based machine learning approach to pinpoint the underlying causes of issues, allowing for swift identification of appropriate solutions.
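As an illustration of the similarity-based idea behind RCA, the sketch below matches a new incident's feature vector against a catalogue of past incidents with known root causes. The features, catalogue, and similarity measure are hypothetical stand-ins, not SPATIAL's actual implementation.

```python
import numpy as np

# Hypothetical catalogue of past incidents: feature vector -> known root cause.
# Illustrative features: packet loss, latency spike, auth failures, CPU load.
known_incidents = {
    "misconfigured gateway": np.array([0.7, 0.9, 0.0, 0.2]),
    "credential stuffing":   np.array([0.1, 0.2, 0.9, 0.3]),
    "resource exhaustion":   np.array([0.3, 0.6, 0.1, 0.9]),
}

def root_cause(observation: np.ndarray) -> str:
    """Return the past incident most similar to the new observation (cosine similarity)."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(known_incidents, key=lambda c: cosine(observation, known_incidents[c]))

print(root_cause(np.array([0.2, 0.3, 0.8, 0.2])))  # -> "credential stuffing"
```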
To improve the performance, explainability, and resilience of these AI-driven security tools, SPATIAL addresses the following critical areas:
Deployment of 4G/5G/IoT Testbeds: SPATIAL incorporates real, private 4G/5G/IoT networks, complete with security analysis and comprehensive setup instructions.
XAI Framework for 5G/IoT Traffic Analytics: SPATIAL has designed an open-source framework with two main components: Network Traffic Analysis and Explainable AI (XAI) for Resiliency, integrated into the SPATIAL platform.
Through extensive testing, SPATIAL has demonstrated the robustness and reliability of these AI models, assessing metrics for accountability and resilience using both public and private datasets. This framework supports a diverse range of users, including network administrators, security analysts, IT teams, cybersecurity researchers, enterprises, organizations, and academic institutions, and serves as a valuable resource for experimentation, validation, teaching, and research in network security and AI.
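To give a flavour of the kind of explainability analysis such a framework enables, the sketch below ranks which traffic features a classifier relies on using permutation importance. The model, data, and feature names are synthetic placeholders rather than the SPATIAL framework's API.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
features = ["pkt_size_mean", "inter_arrival_ms", "flow_duration_s", "dst_port_entropy"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # synthetic "attack" label

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {score:.3f}")  # higher score = the model relies on the feature more
```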
Read more here.
UC3: Accountable AI in Emergency eCall System
SPATIAL Use Case 3 focuses on designing, developing, and integrating an accountable, AI-based emergency detection system within a next-generation emergency communication platform. This system is designed to automatically identify emergencies by analyzing data collected from IoT sensors using advanced AI technologies. Once an emergency is detected, an automated emergency call (eCall) is triggered to connect with a trained medical professional, enabling immediate medical assistance.
During the eCall, both the patient and the responding medical expert can access all relevant information, including sensor data, AI model decisions, and explanations detailing why the AI identified an emergency. This access allows the medical expert to quickly assess the situation, guide first responders more accurately, and implement appropriate medical interventions more effectively. The system’s explainability and accountability features in Use Case 3 are aimed at improving and accelerating the entire emergency response chain, ensuring faster and more informed decision-making.
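As a rough illustration of the information bundle such an eCall could carry, the sketch below packages sensor readings, the model's decision, its confidence, and a per-feature explanation. All field names and values are hypothetical, not the actual UC3 message format.

```python
from dataclasses import dataclass, field

@dataclass
class ECallPayload:
    sensor_readings: dict          # raw IoT data, e.g. {"heart_rate": 38, "spo2": 82}
    model_decision: str            # e.g. "cardiac emergency"
    confidence: float              # model confidence in [0, 1]
    explanation: dict = field(default_factory=dict)  # per-feature contribution to the decision

payload = ECallPayload(
    sensor_readings={"heart_rate": 38, "spo2": 82, "motion": "none"},
    model_decision="cardiac emergency",
    confidence=0.93,
    explanation={"heart_rate": 0.61, "spo2": 0.27, "motion": 0.12},
)
print(payload.model_decision, payload.confidence)
```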
Read more here.
UC4: Resilient Cybersecurity Analytics
SPATIAL Use Case 4 addresses the need for Trustworthy AI in cybersecurity, focusing on ensuring that AI models are ethical, reliable, transparent, and, crucially, robust against various attacks. As AI becomes central to technology and daily life, robust AI systems are essential, especially for cybersecurity, to maintain their functionality in the face of attacks or unexpected inputs.
Two main threats to model integrity are data poisoning and model evasion. In data poisoning, attackers manipulate training data to mislead models, while model evasion involves creating “adversarial examples” to trick AI systems into incorrect predictions. For instance, subtle changes to a stop sign could cause an AI system to misidentify it as a speed limit sign, illustrating how easily model integrity can be compromised. These vulnerabilities underscore the importance of resilience in AI, as attacks like these can erode trust and lead to significant consequences in safety-critical applications.
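To make model evasion concrete, the sketch below applies the fast gradient sign method (FGSM), a standard technique for crafting adversarial examples, to a toy logistic classifier. It illustrates the general attack rather than any SPATIAL system; the weights and inputs are made up for the example.

```python
import numpy as np

w, b = np.array([1.5, -2.0, 0.7]), 0.1   # toy trained classifier weights

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # probability of class "malicious"

x = np.array([0.4, -0.3, 0.5])           # input classified as malicious (~0.84)
grad = w * predict(x) * (1 - predict(x)) # gradient of the score w.r.t. the input
x_adv = x - 0.4 * np.sign(grad)          # small step that pushes the score down

# The perturbed input drops below the 0.5 threshold and evades detection.
print(predict(x), predict(x_adv))
```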
Read more here.
EDUCATIONAL MODULES
SPATIAL has developed two educational modules, “Trustworthy AI” and “Advanced Trustworthy AI,” designed to provide learners with essential and advanced skills for understanding and responsibly managing AI systems. These modules focus on critical aspects of AI’s role and impact, helping learners comprehend AI’s effects on society, the environment, markets, and democratic systems.
Through these courses, participants learn to identify and mitigate biases in AI decision-making, assess the importance of explainability, and implement transparency-enhancing techniques. Additionally, they address critical security and privacy challenges in AI systems and explore practical ways to manage these risks, including understanding relevant legal obligations.
These educational resources represent SPATIAL’s commitment to promoting responsible AI use and fostering expertise in trustworthy AI practices.
Learn more here.
SPATIAL PLATFORM
The SPATIAL platform is an advanced tool designed to evaluate the trustworthiness of AI models through a set of services and metrics. By combining qualitative assessments of accountability, privacy, and resilience, the platform provides an elevated level of explainability and reliability for AI models used in SPATIAL’s various applications. Internally, it features a range of components, including quality metrics, explainability tools, and an interactive user interface. The quality metrics component evaluates key attributes such as accountability (measuring accuracy, confidence, and consistency), resilience (assessing impact), and privacy (using differential privacy). Explainability components make AI model decisions understandable to diverse stakeholders, while the interactive interface adapts results and insights to users’ specific needs, facilitating easier access and understanding of AI analyses.
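As an illustration of what such quality metrics might look like in code, the sketch below computes accuracy, mean confidence, and cross-run consistency from model predictions. It conveys the idea only and is not the SPATIAL platform's API; the function name and inputs are assumptions.

```python
import numpy as np

def accountability_metrics(probs: np.ndarray, labels: np.ndarray, probs_rerun: np.ndarray):
    """probs: class probabilities from one run; probs_rerun: from a repeated run."""
    preds = probs.argmax(axis=1)
    return {
        "accuracy": float((preds == labels).mean()),
        "confidence": float(probs.max(axis=1).mean()),  # mean top-class probability
        "consistency": float((preds == probs_rerun.argmax(axis=1)).mean()),  # agreement across runs
    }

rng = np.random.default_rng(1)
p1 = rng.dirichlet(np.ones(3), size=100)      # synthetic predictions, two runs
p2 = p1 + rng.normal(0, 0.01, p1.shape)
print(accountability_metrics(p1, rng.integers(0, 3, 100), p2))
```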
Built on a microservice architecture, the SPATIAL platform supports modularity, scalability, and efficient validation. Each component operates independently as a microservice, enabling in-depth analysis of specific metrics, such as accountability or privacy, to create a comprehensive trustworthiness profile for AI models. This modular approach ensures that the platform is robust, adaptable, and ready for effective deployment in a variety of settings, empowering users to confidently evaluate and trust AI solutions.
Read more here.
THE COMPASS FRAMEWORK
The COMPASS framework is a structured tool designed to help organizations evaluate the technical potential and societal impact of AI systems, fostering the development of responsible and trustworthy AI. By offering a roadmap, COMPASS guides AI developers and auditors in navigating the complex landscape of AI technologies and their social implications. This adaptable framework allows practitioners to customize the assessment process according to industry-specific needs and priorities through a self-assessment approach, aligning with SPATIAL’s design principles to highlight essential skills throughout the AI lifecycle.
COMPASS integrates key principles to enhance trust, fairness, and social benefit across all stages of AI deployment. It begins by defining the context of each AI system, considering all stakeholders involved and affected. It emphasizes openness and transparency, ensuring that AI systems are understandable to all stakeholders with clear documentation. Evaluation measures are developed to guarantee that AI systems operate fairly and reliably, with a solid commitment to protecting user privacy and data security throughout the lifecycle. Accountability is prioritized, holding AI systems and their creators responsible for their actions. Additionally, COMPASS embeds security and safety protocols to defend against attacks and promotes sustainability, maintaining long-term performance and environmental responsibility.
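Purely as an illustration, a self-assessment following these stages could be recorded in a structure like the one below. The stage names, scoring scale, and threshold are assumptions made for the example, not the framework's actual schema.

```python
# Hypothetical stage list mirroring the COMPASS principles described above.
COMPASS_STAGES = [
    "context_and_stakeholders",
    "openness_and_transparency",
    "fairness_and_reliability_evaluation",
    "privacy_and_data_security",
    "accountability",
    "security_and_safety",
    "sustainability",
]

def self_assessment(scores: dict) -> dict:
    """Score each stage 0-5 and flag stages below a chosen threshold (here, 3)."""
    missing = [s for s in COMPASS_STAGES if s not in scores]
    if missing:
        raise ValueError(f"unscored stages: {missing}")
    return {"flags": [s for s in COMPASS_STAGES if scores[s] < 3],
            "overall": sum(scores.values()) / len(scores)}

print(self_assessment({s: 4 for s in COMPASS_STAGES}))
```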
Read more here.
PODCAST
One of the project’s key communication achievements was its podcast series. The podcast aims to bring the importance of trustworthy AI closer to the general public, while also discussing more complex topics of interest to an academic audience. The four topics covered in the podcast are:
Episode 1: New challenges for gender equality in AI
Episode 2: The impact of AI and machine learning on society’s wellbeing
Episode 3: The accountability of Artificial Intelligence-based systems
Episode 4: SPATIAL use cases
TESTIMONIALS
Insights from stakeholders highlight how SPATIAL is transforming the field of responsible AI through impactful solutions. Questions posed to these stakeholders included: What excites you about SPATIAL’s approach to explainable AI? How might SPATIAL’s technologies influence the future of network security? Will AI play a larger role in privacy and security with 6G compared to 5G? What are the primary privacy and security challenges for 6G? And what should individuals be most concerned about regarding their privacy and security? You can watch the testimonial series here.
We also gathered feedback from the Advisory Board, which we have included here.
CONCLUSION and OUTLOOK
In summary, SPATIAL provides a novel and improved approach to assessing AI resilience, trust, and accountability. By strengthening these qualities, SPATIAL can ensure greater long-term trust in AI for cybersecurity. In terms of socio-economic impact, the SPATIAL educational modules enhance the technical skills and the ethical and socio-legal awareness of current and future AI engineers and developers. This will create long-term socio-economic impact for the EU, as the accountable development of AI security solutions can be better ensured. In terms of societal impact, the SPATIAL COMPASS framework is a solid contribution that enables an accountable and transparent understanding of AI applications for users, software developers, and security service providers. As it is adopted by more AI practitioners, this framework can facilitate better alignment with current and future relevant EU policies.
For more information about SPATIAL’s achievements, such as its design patterns and principles, check the details here and watch our video!