Creating accountable algorithms is complicated by the experiences and expectations of developers as well as by the structural features of the technology itself. WP4, titled “User Engagement, Acceptance and Practice Transitions”, therefore has the mission of mapping this complexity and providing solutions to current and potential risks and challenges. In this work package, we focus on understanding the different experiences and expectations of the partners in the SPATIAL project as they develop accountable algorithms.

By identifying problems that arise, or could arise, during the development process, we aim to develop guidelines and frameworks that help AI systems work more transparently, resiliently, and robustly. Importantly, we do not carry out our analyses through theoretical and desk studies alone. In other words, we don’t do our research in a vacuum: we engage directly with our partners and see how they approach their work in SPATIAL. The interviews conducted within this work package prompt AI developers to rethink their technical work from a social and ethical perspective. In this way, overlooked issues become visible, and areas in need of improvement are revealed. In addition, we want to share our insights with the broader AI/ML community, offering best practices and guidelines to improve the development of trustworthy AI beyond our project and partners. Our main objective is to approach SPATIAL work from a social, legal, and ethical perspective.

  • What were the main activities carried out in the first project period?

We had two major tasks in the first period of the project. The first centred around an embedded analysis of technology development within the project. To do this, we observed various meetings, both with the whole consortium and specific to tasks in WP3. We also interviewed our partners to understand how they approach and conduct development, what factors come into play, and what supports or complicates their work. One of our key findings from this research is that audiences steer much of the work relevant to the development of trustworthy AI. These audiences are other experts, end users, lay people, data subjects, society, and (AI) regulators (see figure). Some of these audiences may overlap or have competing interests, which can certainly complicate the work.

[Figure: the audiences relevant to the development of trustworthy AI]

Our second major activity in the first period was the development of the Trustworthy AI education modules. This work was managed by MinnaLearn, which has a proven track record of developing engaging and interesting courses. We initially planned for one education module but soon realised that a single course couldn’t effectively teach both technical and less technical audiences. As a result, we have two exciting courses that address these audiences separately. The first module, Trustworthy AI, is geared towards a more general audience and aims to convey the importance, considerations, and impacts of trustworthy AI. The second module, Advanced Trustworthy AI, targets a technical audience and covers the principles, methods, and tools for implementing trustworthy AI in practice.

Aside from these core activities, we’ve also contributed to WP1, where we helped define key requirements for SPATIAL tools.

In WP4, we aim to go one step further, building on the analysis and insights we have gained in the project so far. Our next action point is to integrate sociotechnical auditing into the demonstration and deployment activities of SPATIAL outcomes. To this end, we will develop a broader notion of sociotechnical auditing, one that takes the different audiences into account, as a mechanism for checking whether all stages of the ML and AI development lifecycle meet ethical expectations and standards. With the help of all WP4 contributors, we aim to foster a culture among developers that, through open communication, deepens understanding of and commitment to responsible and ethical practices. This next step should support developers in making more explicit how sociotechnical processes add trust and value, alongside a more technical understanding of trustworthiness.

  • How is this WP linked with the others? How are the activities carried out in this WP beneficial for the other project tasks?

Our work integrates with many of the other work packages. In some, such as the tasks in WP1, we are directly involved in the work; in others, such as WP3, we contribute more indirectly by observing the development work.

Our current work connects mostly to WP5. We are developing a framework to evaluate, validate, and future-proof SPATIAL tools, and our reflections will inform the demos and help formulate best practices and guidelines for future applications of SPATIAL tools. In doing so, we first evaluate the socio-technical conditions under which SPATIAL tools are developed, drawing on the architecture and configuration development in WP3. WP4 therefore has an organic connection to the other work packages: it is where the work done in the project is evaluated and made visible. In other words, WP4 both feeds the other work packages and is fed by them.

  • Which part of this WP can be the most impactful?

It’s difficult to choose! All our tasks have an impact in different ways. The impact of the education modules is the clearest, not least because they have already been shared with the world and are thus visible. We believe the education modules will help any interested person reflect on AI development and implementation: why it should be done in trustworthy ways and, most importantly, how trustworthiness can be achieved!

  • What are your plans for the remaining part of the project?

As mentioned above, our work in the second part of the project investigates ways of evaluating, validating, and future-proofing SPATIAL tools. We want our work to have a lasting impact even after the project concludes. Again, our role is to look at the social, legal, and ethical implications so that we don’t focus on technical requirements alone. We have just concluded the first round of interviews with some of our partners, and the next phase of this work will be to develop a framework that supports the evaluation process. After that, we will test the framework on our use cases. The outcomes of this evaluation should support the demonstration activities in WP5. Beyond that, we also want to share best practices for evaluation with the broader AI development community in the EU and beyond.