ONE YEAR ON: achievements, lessons learnt and the way forward

Happy birthday SPATIAL! One year of achievements, challenges, confrontations and new ambitions.

Uncertainty in AI systems and the opacity of data-driven algorithms demand greater efforts towards data privacy, resilience engineering, and legal-ethical accountability.

This poses the challenge of maximising the benefits of applying AI in numerous engineering and analytics solutions while preventing and minimising its risks.

In safety-critical domains such as cybersecurity, trustworthy AI can enhance security analytics and foster more competitive offerings of secure products and services.

This is why we launched the SPATIAL project one year ago: to tackle the identified gaps around data issues and black-box AI by designing and developing resilient, accountable metrics, privacy-preserving methods, verification tools and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity.

Over the past year, the SPATIAL team achieved great results and set the pace for the work to be performed in the following year, as explained in the sections below.

FIRST YEAR ACHIEVEMENTS

In the first year since its commencement, the SPATIAL project has made steady progress in its goals towards trustworthy AI in cybersecurity.

We have made great strides towards our objective of addressing security risks and threats to system architectures. This includes an extensive requirements analysis for the development of trustworthy AI implementations, as well as a threat analysis of the potentially harmful elements that such AI architectures and implementations may be exposed to with regard to security and data quality. Related to this, the project members have also thoroughly analyzed accountability and resilience elements within deep neural networks and algorithmic frameworks currently deployed in the IoT, 5G and cybersecurity application domains. This foundational work is crucial for future project developments, in which we will define explainability, accountability, privacy and resilience metrics for such AI systems.

Additionally, the project has completed its development of a framework for socio-technological analysis, evaluating the social, legal and technical limitations and challenges that the project will face. This framework is to be utilized going forward in an internal embedded field analysis that is being carried out throughout the project. This field analysis, inspired by ethnographic research, aims to understand the project’s internal communication, practices and development environment and highlight crucial social aspects for increasing explainability and transparency in AI development.

The project has also seen its first wave of scientific dissemination, with research papers published and presented in both peer-reviewed journals and conferences. At this time, we are well on course to reach and surpass our publications target for international journals, while our conference target still needs a boost. However, we expect this to pick up quickly in the coming year because of our upcoming development and deployment stage (more on that below).

Finally, we have had a solid start to our outreach and promotional activities (which you are taking part in right now as you are reading this). Our social media community is steadily growing, and our website and newsletter are expanding in content. In the coming year, we also plan to focus more on event presence, promoting SPATIAL in both academic and industrial settings. This increased presence will also be helped by the coming year’s increased focus on development tasks and the launch of our project’s pilots.

LESSONS LEARNT

Due to the COVID pandemic, physical meetings were off the table for most of the project’s first year. As such, our first consortium meeting in May of this year was handled entirely remotely. While this was not an ideal situation from a social standpoint, it taught us many lessons in how to best organize an efficient work plan and meeting schedule using online tools, and has in many ways made us better equipped for overcoming the geographical challenges that every multi-national consortium faces.

Thanks to the saved travel costs and the easing of restrictions, however, we were able to book an additional consortium meeting later in September, suitably marking the end of the project's first year. The location selected was Berlin, and it was here that the consortium members were finally able to meet face-to-face for the first time. This also gave us valuable perspectives on the often overlooked differences between how we collaborate in person and online.

THE WAY FORWARD

In the year ahead, there are many exciting new developments for SPATIAL. For one, our pilot use cases are now taking off as the system architecture development, deployment and demonstration stages begin. This means that we have a very hectic year ahead of us, with many moving parts, and this year will test our organizational and collaborative skills. It will also be a year in which our industrial project members will truly shine. This stage will also generate many more research results for publication and demonstration. We anticipate these factors will increase SPATIAL's outreach presence and leave us with many more opportunities to spread the word about our work at conferences and conventions.

Additionally, with MinnaLearn joining the project, we have started the concept phase for the education module that is to be developed as part of the project. This module will allow us to turn findings from the project into teachable content, disseminating our knowledge for educational purposes across institutions and organizations that adopt the module in their courses. Our goal is to provide a rich learning resource for educating future AI engineers and to give policy, legal and ethical researchers the critical knowledge they need to better understand the technical side of AI. In doing so, we aim to help the field of Explainable AI mature and move on to the next stage of its development. At the same time, we want researchers and policymakers who orbit the AI stratosphere to tag along in this development, bridging rather than widening the gaps between these communities and AI engineers.

Stay tuned on our social media channels for more information and updates on our activities!

LinkedIn
Twitter