The SPATIAL platform estimates AI trustworthiness through a combination of metrics, deployed as services, that analyze and assess AI models and the SPATIAL use cases. The platform aims to provide explainable AI solutions grounded in accountability, privacy, and resilience. Internally, it comprises several key components for achieving these design goals.
The platform consists of three main groups of components: the quality metrics components, the explainability components, and the interactive interface components. The quality metrics components cascade into services that evaluate an AI model's accountability (via accuracy, effectiveness, confidence, compacity, and consistency), resilience (via an impact metric), and privacy (via differential privacy). The explainability components reveal the internal workings of AI models through various methods, making the decision-making process understandable to different stakeholders in a concise, accessible way. The interactive interface component is an adaptive user interface that enables stakeholders to engage with the platform; it tailors the presentation of results and analyses to the user's profile. Insights from quality metric analysis, explanations, and models within the platform are consolidated into comprehensive reports. Together, these components make SPATIAL a powerful and versatile platform that lets users assess and understand the trustworthiness of AI models with confidence.
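To illustrate what the accountability branch of the quality metrics might compute, the sketch below implements two of the listed measures, accuracy and confidence, in plain Python. The function names and toy data are our assumptions for illustration, not the platform's actual API.

```python
from statistics import mean

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_confidence(probabilities):
    """Average of the model's top-class probabilities per prediction."""
    return mean(max(p) for p in probabilities)

# Toy evaluation: three predictions, two of them correct.
y_true = [1, 0, 1]
y_pred = [1, 0, 0]
probs  = [(0.2, 0.8), (0.9, 0.1), (0.6, 0.4)]

print(accuracy(y_true, y_pred))          # 2 of 3 correct
print(round(mean_confidence(probs), 2))  # average top-class probability
```

In a real deployment such functions would sit behind a service endpoint and be combined with the other accountability measures into a single score.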
Structurally, the SPATIAL platform follows a microservice architecture: each component is a collection of independent services running as individual processes. This design choice provides modularity, scalability, and efficient validation of the various properties that contribute to the overall trustworthiness of an AI model before deployment. For instance, the quality metric component can be further cascaded into dedicated microservices for accountability metrics, resilience metrics, and privacy metrics, each performing an in-depth analysis of its specific aspect and contributing to a comprehensive assessment of the AI model.
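The cascading of the quality metric component into dedicated services can be sketched in miniature as a registry of independent metric callables, each standing in for a separate microservice endpoint. The route names, payload fields, and the impact formula (relative accuracy drop under perturbation) below are illustrative assumptions, not SPATIAL's actual interfaces.

```python
# Registry mapping route-like names to independent metric services,
# mirroring how separate microservices would expose REST endpoints.
SERVICES = {}

def service(name):
    """Decorator that registers a metric function under a route name."""
    def register(fn):
        SERVICES[name] = fn
        return fn
    return register

@service("metrics/accountability")
def accountability(report):
    # Simplest accountability measure: plain accuracy.
    return {"accuracy": report["correct"] / report["total"]}

@service("metrics/resilience")
def resilience(report):
    # Assumed impact metric: relative accuracy drop under perturbation.
    return {"impact": 1 - report["perturbed_accuracy"] / report["clean_accuracy"]}

def assess(route, payload):
    """Dispatch a request to the named metric service."""
    return SERVICES[route](payload)

print(assess("metrics/accountability", {"correct": 90, "total": 100}))
print(assess("metrics/resilience",
             {"clean_accuracy": 0.9, "perturbed_accuracy": 0.72}))
```

Because each service only shares the dispatch contract, any one of them can be replaced, scaled, or validated independently, which is the point of the microservice decomposition.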
Functionalities:
The SPATIAL platform is designed and developed to simplify the process of building and refining AI models. Its model-building functionalities allow AI models to be constructed and refined within the platform. One of its standout features is the Explainable AI (XAI) service, which uses various methods to make the AI's decision-making process transparent. This transparency allows users to understand how AI models arrive at their conclusions, enhancing trust in the technology. By providing clear, adaptive explanations through the integrated LLM service, developers can better communicate results to different users, boosting their confidence in the AI's methods.
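One widely used model-agnostic explanation method of the kind an XAI service might offer is permutation importance: shuffle a single feature's values and measure the resulting drop in accuracy. The sketch below is a minimal stdlib implementation on a toy model; it illustrates the technique in general, not SPATIAL's own XAI service.

```python
import random

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Mean accuracy drop when one feature column is shuffled —
    a simple, model-agnostic measure of that feature's relevance."""
    rng = random.Random(seed)
    base = sum(model(row) == label for row, label in zip(X, y)) / len(y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        X_perm = [list(row) for row in X]
        for row, value in zip(X_perm, col):
            row[feature] = value
        acc = sum(model(row) == label for row, label in zip(X_perm, y)) / len(y)
        drops.append(base - acc)
    return sum(drops) / n_repeats

# Toy model that only looks at feature 0.
model = lambda row: int(row[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, feature=0))  # positive: feature 0 matters
print(permutation_importance(model, X, y, feature=1))  # zero: feature 1 is ignored
```

Explanations like this tell a stakeholder *which* inputs drive a decision; an LLM layer on top can then rephrase that finding for different audiences.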
SPATIAL also includes quality metric services that enable model refinement to meet regulatory standards. The platform’s fairness service examines training data to identify and address potential biases, ensuring the AI models produce fair and unbiased results. Additionally, SPATIAL offers a privacy service to ensure that models comply with data privacy regulations, protect sensitive information, and promote responsible data handling practices. The accountability service within SPATIAL evaluates key aspects of AI models, such as accuracy, transparency, and traceability. This comprehensive assessment fosters trust and allows users to understand the decision-making processes of their AI models. Together, these services make SPATIAL a reliable and ethical choice for developing AI solutions.
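As a concrete example of the kind of check a fairness service might start from, the sketch below computes the demographic parity gap: the difference in positive-prediction rates between two groups. The function name and toy data are illustrative assumptions, not the platform's implementation.

```python
def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two
    groups — a basic group-fairness signal for a bias audit."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(group_preds) / len(group_preds)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Toy predictions: group A receives positives 75% of the time, group B 25%.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(preds, groups))  # 0.75 vs 0.25 -> gap of 0.5
```

A gap near zero suggests the model treats the groups similarly on this measure; a large gap flags a potential bias for the developer to investigate before deployment.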