Did you know that TU Delft is the oldest and largest technical university in the Netherlands? It counts over 28,000 students and 6,000 employees who share a fascination for science, design and technology.
This fascination carries over into the SPATIAL project, in which TU Delft plays the leading role in a top-notch effort to achieve trustworthy, transparent and explainable AI for cybersecurity solutions.
More specifically, it is the Cyber-Physical Intelligence Laboratory (CPI Lab) within TU Delft that is responsible for the project. Part of the Department of Engineering Systems and Services and holder of a remarkable set of awards and research grants, the lab focuses on Edge AI solutions for cyber-physical systems across a wide range of domains. The projects run by CPI Lab sit at the intersection of distributed computing, data intelligence, networking and security.
CPI Lab is led by Dr. Aaron Ding, the coordinator of the SPATIAL project. Dr. Ding is also part of one of the prestigious Marie Curie ITN projects funded by the European Commission.
How did you start SPATIAL?
The initial idea of SPATIAL came from a conversation back in 2019 with my colleague in Rotterdam, Jason Pridmore, about jointly looking into the rising challenge of embedding (opaque) AI into IoT and cybersecurity solutions. Since the challenge is essentially a socio-technical problem, matching nicely with our backgrounds (my lab on technology R&D and Jason’s group on social science), I started to build the consortium that led to SPATIAL.
What are your expectations in a project of this nature?
My main expectation for SPATIAL is to effectively enhance AI deployment and data governance in cybersecurity. This expectation is both ambitious and pragmatic, given the strong SPATIAL consortium and the high demand from the EU to safeguard upcoming AI-driven safety and security services.
What can the research community expect from SPATIAL?
In a nutshell, two things can be expected: 1) solid building blocks for developing a trustworthy governance and regulatory framework for AI-driven security, in terms of evaluation metrics, verification tools and system frameworks; and 2) dedicated education modules for trustworthy AI in cybersecurity.
Where do you see SPATIAL results in 10 years?
In the ICT domain, a lot can emerge and change in 10 years. I expect the SPATIAL results to go beyond a purely technical pursuit and further pave the way towards trustworthy AI on both the societal and technological fronts.