Description of the use case
This post is the first in a series about our validation cases, or pilots. Starting with the pilot led by Montimage, four different use cases will demonstrate different elements of trust in Artificial Intelligence (AI); in this case, improving the explainability, resilience and performance of cybersecurity analysis in 5G and IoT networks.
What is the pilot for?
This use case will evaluate the guidelines and techniques proposed by the SPATIAL project relating to the explainability, resilience and distribution of Artificial Intelligence and Machine Learning (AI/ML) techniques. This will be done by assessing the techniques currently used by Montimage in its open-source MMT monitoring framework for the cybersecurity analysis and protection of Internet of Things (IoT) networks and 5th Generation (5G) and beyond mobile networks.
Why is it useful for Montimage?
The SPATIAL project will be decisive in making the AI/ML techniques used by Montimage more transparent and more resilient to adversarial attacks, particularly Montimage’s encrypted-traffic and root cause analysis (RCA) ML algorithms. The metrics defined to measure accountability and resilience will be used to confirm their pertinence and applicability, and then to measure the improvements made. Montimage will provide a testbed pilot to integrate and assess the improved techniques.
What tangible results are expected from it?
Montimage’s open-source and commercial security monitoring and enforcement framework for 5G/IoT networks (MMT), and its fully portable 5G mobile network solution (5G-in-a-Box), will benefit from the results of SPATIAL through substantially improved AI/ML techniques, particularly in robustness to attacks and explainability. The metrics developed for assessing AI/ML techniques will make it possible to optimise them using distributed ML data and processing, and to extend Montimage’s training, penetration testing and cyber range capabilities.