A robot hand pointing a finger at a human hand pointing back. Photo: Getty Images

Studying human-to-computer bias transference

Autonomous AI systems sometimes make unreasonable decisions. When the same unreasonable decision recurs systematically, it is labelled machine bias.

Author
Tore Pedersen

Co-authors
Johanna Johansen, Christian Johansen

Publication
AI & SOCIETY: Journal of Knowledge, Culture and Communication


Machine biases tend to be attributed to non-representative datasets, inadequate machine training, or a combination of the two. Alternative explanations have largely escaped attention, for instance that machine biases could originate from the programmers, that is, from the people who write the initial code.

Tore Pedersen (ONH), Johanna Johansen (UiO/IFI) and Christian Johansen (UiO/IFI) chose to explore alternative explanations from the multidisciplinary perspectives of Behavioural Computer Science (Pedersen et al., 2016) and Behavioural Artificial Intelligence (Pedersen and Johansen, 2019). 

From this point of departure they arrived at the hypothesis that machine biases could also originate from the sociocultural backgrounds of the people who write the initial code, for instance their cultural values and preferences.

Results from the experiments show that people from different cultural backgrounds tend to inadvertently transfer their respective values and preferences to a computer program during a coding scenario.

Abstract

It is generally agreed that one origin of machine bias lies in characteristics of the dataset on which the algorithms are trained, i.e., the data does not warrant a generalized inference. We, however, hypothesize that a different ‘mechanism’ may also be responsible for machine bias, namely that biases may originate from (i) the programmers’ cultural background, including education or line of work, or (ii) the contextual programming environment, including software requirements or developer tools.

Combining an experimental and comparative design, we study the effects of cultural and contextual metaphors, and test whether each of these is ‘transferred’ from the programmer to the program, thus constituting a machine bias. Our results show that (i) cultural metaphors influence the programmer’s choices and (ii) contextual metaphors induced through priming can be used to moderate or exacerbate the effects of the cultural metaphors.

Our studies are purposely performed with participants of varying educational backgrounds and programming skills, ranging from novice to proficient.
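To make the hypothesis concrete, consider a minimal, hypothetical sketch (our illustration, not taken from the study) of how a programmer's culturally shaped default choice can end up as a systematic machine bias, independent of any training data:

# Hypothetical illustration: a programmer's culturally shaped default
# becomes a systematic bias in the program's behaviour.

def format_full_name(given: str, family: str, family_first: bool = False) -> str:
    # The default family_first=False encodes the programmer's own
    # given-name-first naming convention.
    return f"{family} {given}" if family_first else f"{given} {family}"

# Every caller that relies on the default inherits the programmer's
# convention: names from family-name-first cultures are systematically
# rendered in an order their bearers would consider wrong, i.e. a
# machine bias that originates from the programmer rather than from a
# non-representative dataset.
print(format_full_name("Wei", "Zhang"))      # prints "Wei Zhang"
print(format_full_name("Tore", "Pedersen"))  # prints "Tore Pedersen"

In this sketch the bias is deterministic and easy to spot; the point of the study is that analogous value-laden defaults can be transferred inadvertently, and that contextual priming in the programming environment can strengthen or weaken that transfer.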

Read the article
 

