A new study under the SPATIAL project investigates the role diversity plays in artificial intelligence (AI) development teams and how it affects the trustworthiness of AI systems (as understood through the AI HLEG ethics guidelines for trustworthy AI [1]). The study aims to understand whether and how diverse teams consider aspects of trustworthy AI differently from teams with greater internal similarity.

Background

Many AI development teams and organizations lack diversity, especially in US contexts [2], [3]. As a result, only a limited range of viewpoints, generally those of majority groups in society, is included in the development process. This can make it harder to recognize, consider, and prevent the many ways in which AI systems can harm people [4], [5].

From earlier research into diversity in organizations, we know that a supportive diversity climate is necessary to take advantage of the value of workplace diversity [6], [7]. Diversity covers more than cultural background, gender, or age: in AI development, educational background and specialization are important differences that appear to strongly influence how problems are identified and addressed.

Survey study

We have developed a survey to understand how these different ideas about diversity and trustworthy AI development come together. Do you work in artificial intelligence development or research? If so, your insights can help us. Please fill out the survey via https://erasmusuniversity.eu.qualtrics.com/jfe/form/SV_6xGGjbwZb9eokZg

The survey takes about 8 minutes to complete. It asks questions about the diversity of your team or organization and what features of trustworthiness you implement in your AI projects.

We would greatly appreciate it if you could fill out the survey and share it with contacts, team members, or departments that may be involved in AI development and research.

Results

When the survey and the analysis are completed, a summary of the outcomes will be presented as a follow-up!

The study’s results will be processed anonymously, and the researcher will take great care to protect participants’ privacy and confidentiality.


References

[1]          AI HLEG, ‘Ethics guidelines for trustworthy AI’, Apr. 2019. Accessed: Apr. 26, 2023. [Online]. Available: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

[2]          B. Adams and F. Khomh, ‘The Diversity Crisis of Software Engineering for Artificial Intelligence’, IEEE Software, vol. 37, no. 5, pp. 104–108, Sep. 2020, doi: 10.1109/MS.2020.2975075.

[3]          S. M. West, M. Whittaker, and K. Crawford, ‘Discriminating systems: Gender, race, and power in AI’, AI Now Institute, Apr. 2019. [Online]. Available: https://ainowinstitute.org/discriminatingsystems.html

[4]          S. Fazelpour and M. De-Arteaga, ‘Diversity in sociotechnical machine learning systems’, Big Data & Society, vol. 9, no. 1, pp. 1–14, Jan. 2022, doi: 10.1177/20539517221082027.

[5]          A. A. H. de Hond, M. M. van Buchem, and T. Hernandez-Boussard, ‘Picture a data scientist: a call to action for increasing diversity, equity, and inclusion in the age of AI’, Journal of the American Medical Informatics Association, vol. 29, no. 12, pp. 2178–2181, Dec. 2022, doi: 10.1093/jamia/ocac156.

[6]          J. Hofhuis, K. I. Van Der Zee, and S. Otten, ‘Social Identity Patterns in Culturally Diverse Organizations: The Role of Diversity Climate’, Journal of Applied Social Psychology, vol. 42, no. 4, pp. 964–989, 2012, doi: 10.1111/j.1559-1816.2011.00848.x.

[7]          P. F. McKay and D. R. Avery, ‘Diversity Climate in Organizations: Current Wisdom and Domains of Uncertainty’, in Research in Personnel and Human Resources Management, vol. 33, Emerald Group Publishing Limited, 2015, pp. 191–233. doi: 10.1108/S0742-730120150000033008.