Marta Choroszewicz:

Examining power dynamics materialized in AI methods–based technologies

I participated in the networking event AI and Society, organized by researchers from DataLit, the FCAI Society, the University of Helsinki, Aalto University and Tampere University. In the panel discussion, we addressed the question, “What are the key issues for AI in society?”

Below is a summary of the points that I mentioned in my brief talk during this panel discussion.

As a sociologist working at the intersection of research on professions, feminist theories, science and technology studies, I focus on questions of power and social inequality related to building, deploying and using data-driven technologies, including those based on AI methods, as well as on the uses of healthcare and social data to build these technologies. More broadly, I am interested in studying the impacts of data-driven technologies on society in terms of equality, diversity, justice and inclusion/exclusion.

Power dynamics in the production and deployment of AI methods–based technologies

To be able to challenge the current techno-utopian narratives of AI that frame AI methods–based technologies as magical, objective and neutral (e.g. see Elish and Boyd, 2018), we need more research on the production, deployment and uses of these technologies in order to develop critical literacy around them. Such research explores important questions, for example, about the labour that goes into creating and using data, some of which are addressed by the project Data Literacy for Responsible Decision-Making and by other scholars (e.g. D’Ignazio and Klein, 2020; Gitelman, 2013; Hand, 2020). Other studies highlight issues of diversity and power dynamics in the building of data-based technologies (Choroszewicz and Alastalo, 2021), including those based on AI methods (Costanza-Chock, 2020). Unpacking the organizational and professional hierarchies among the experts engaged in building these technologies enables us to understand how particular values, interests and visions are materialized in them. Through this research, we enhance our understanding of how these technologies are embedded within the social contexts of specific countries, institutions and communities of experts.

D’Ignazio and Klein (2020) propose a powerful framework for investigating the unequal distribution of power and labour in the making and use of data. It consists of seven principles: 1) examine power, 2) challenge power, 3) elevate emotion and embodiment, 4) rethink binaries and hierarchies, 5) embrace pluralism, 6) consider context and 7) make labour visible. Furthermore, Sasha Costanza-Chock (2020) advocates for the active inclusion of currently marginalized communities in the design and development of technologies.

Enhancing our understanding of the risks, harms and oppression caused by AI methods–based technologies

We know that current AI methods–based technologies reinforce categorical thinking and the classification of individuals into simple categories (Honkela, 2017), which is reductive and disempowering and leads to harm and to the oppression of specific groups of people. Moreover, experiments with these technologies are often conducted on disadvantaged communities, which have limited resources to detect the resulting harms (e.g. see Eubanks, 2017; O’Neil, 2016). It is therefore important to learn from research on the harms and oppression caused by these technologies in order to better understand the conditions in which they operate and their limits. This is central not only to building better technologies but also to creating a socially sustainable socio-legal system that protects vulnerable social groups against harms, errors and discrimination.

We also need a shift in theorizing, from rather technical perspectives on algorithmic biases and fairness toward more structural approaches to social injustice or systems of oppression (e.g. see D’Ignazio and Klein, 2020). These technologies are built, deployed and function at the level of institutions, and they are therefore part of the power that these institutions exercise. Current algorithmic fairness frameworks treat social categories such as gender and race as fixed attributes and thus fail to adequately account for the socially constructed nature of these categories (Hanna et al., 2020). Instead, we need new frameworks that better conceptualize and operationalize these social categories for the particular socio-cultural context and field in which these technologies are applied.

What kind of society with AI methods–based technologies do we want to build?

Despite the hype around AI, the currently existing AI methods–based technologies are examples of so-called narrow AI: technologies designed to perform relatively simple tasks such as matching, sorting, risk prediction, and speech and face recognition. Prominent ethical problems are associated with their functioning (Buolamwini & Gebru, 2018; Gillingham, 2016; Keddell, 2015). Yet these technologies are increasingly applied to automate often complex tasks in the public administration of the welfare state, including, in some countries, access to social benefits and child protection.

We are in a critical phase of shaping the future of AI methods–based technologies and their role in society. Much research and public discussion are therefore needed to determine what kind of society we want to build with AI methods–based technologies, what kinds of tasks should be delegated to these technologies, and who makes these decisions. If we want to develop and use AI methods–based technologies as a force for good, one that bridges social inequalities and empowers marginalized groups instead of reproducing or amplifying existing social inequalities, we must change strategy. We need to move away from the current emphasis on efficiency and cost-effectiveness in building and using these technologies. Instead, we need to think critically and holistically, and to work collaboratively, to build the whole complex socio-legal system: carefully selected AI methods–based technologies together with adequate social institutions, policies and laws that protect people from the unintended negative consequences of these technologies. This system must also enable the rejection of technologies that do not work.

References

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, 77–91. PMLR. https://proceedings.mlr.press/v81/buolamwini18a.html

Choroszewicz, M., & Alastalo, M. (2021). Organisational and professional hierarchies in a data management system: Public–private collaborative building of public healthcare and social services in Finland. Information, Communication & Society. https://doi.org/10.1080/1369118X.2021.1942952

Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. The MIT Press.

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. The MIT Press.

Elish, M. C., & Boyd, D. (2018). Situating methods in the magic of Big Data and AI. Communication Monographs 85(1): 57–80.

Eubanks, V. (2017). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.

Gillingham, P. (2016). Predictive risk modelling to prevent child maltreatment and other adverse outcomes for service users: Inside the ‘black box’ of machine learning. British Journal of Social Work 46(1): 1044–1058.

Gitelman, L. (2013). “Raw data” is an oxymoron. The MIT Press.

Hand, D. (2020). Dark data: Why what you don’t know matters. Princeton University Press.

Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020). Towards a critical race methodology in algorithmic fairness. In Conference on Fairness, Accountability, and Transparency (FAT* ’20), January 27–30, 2020, Barcelona, Spain. ACM, New York, NY, USA. https://doi.org/10.1145/3351095.3372826

Honkela, T. (2017). Rauhankone – Tekoälytutkijan testamentti [The peace machine: An AI researcher’s testament]. Gaudeamus.

Keddell, E. (2015). The ethics of predictive risk modelling in the Aotearoa/New Zealand child welfare context: Child abuse prevention or neo-liberal tool? Critical Social Policy 35(1): 69–88.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.