Title: Killer robots, humanoid companions, and super-intelligent machines: The anthropomorphism of AI in South African news articles

Authors: Susan Brokensha and Thinus Conradie, University of the Free State.

Ensovoort, volume 42 (2021), number 6: 3

Abstract

How artificial intelligence (AI) is framed in news articles is significant, as framing influences society’s perception and reception of this emerging technology. Journalists may depict AI as a tool that merely assists individuals in performing a variety of tasks or as a (humanoid) agent that is self-aware, capable of both independent thought and creativity. The latter type of representation may be harmful since anthropomorphism of AI not only generates unrealistic expectations of this technology but also instils fears about the technological singularity, a hypothetical future in which technological growth becomes unmanageable. To determine how and to what extent the media in South Africa anthropomorphise AI, we employed framing theory to conduct a qualitative content analysis of articles on AI published in four South African online newspapers. We distinguished between social anthropomorphism, which frames AI as exhibiting a human form and/or human-like qualities, and cognitive anthropomorphism, which refers to the tendency to conflate human and machine intelligence. Most articles reflected the social anthropomorphism of AI, while a few framed it only in terms of cognitive anthropomorphism. Several reflected both types of anthropomorphism. Based on the findings, we concluded that anthropomorphism of AI may hinder the conceptualisation of the epistemological and ethical consequences inherent in this technology.

Keywords: Artificial intelligence; Framing theory; News articles; Cognitive anthropomorphism; Social anthropomorphism

1. Introduction

1.1 Social and cognitive anthropomorphism of artificial intelligence

Anthropomorphism describes our inclination to attribute human-like shapes, emotions, mental states and behaviours to inanimate objects/animals, and depends neither on the physical features nor on the ontological status of those objects/animals (Giger, Piçarra, Alves‐Oliveira, Oliveira and Arriaga, 2019:89). Artificial intelligence (AI), including computers, drones and chatbots, can be anthropomorphised regardless of material dissimilarities between these technologies and humans, and despite the absence of an evolutionary relationship (Giger et al., 2019:89). In this study, we examine both the social and cognitive anthropomorphism of AI, where the former process ascribes human traits and forms to AI (cf. Giger et al., 2019:112), and the latter designates the expectation that AI mimics human intelligence (Mueller, 2020:12). We base this distinction on our datasets, which reflect a focus either on the anthropomorphic form and/or human-like qualities of AI, especially when framing human-robot interaction, or on cognitive processes when describing the intelligence of machine learning and deep learning, for instance. Several articles reflect both cognitive and social anthropomorphism (see Section 4).

Cognitive anthropomorphism saturates news coverage of both weak and strong AI; that is when AI is framed as merely simulating human thinking or as matching human intelligence (Bartneck, 2013; Damiano and Dumouchel, 2018:5; Salles, Evers and Farisco, 2020). The penchant to conflate artificial and human intelligence is unsurprising, historically speaking. Watson (2019:417) traces the practice especially to Alan Turing’s (1950) eponymous Turing Test for determining whether a machine can ‘think’. Since then, technology experts and laypeople have framed AI in epistemological terms, constructing it as capable of thinking, learning, and discerning. Human intelligence is notoriously resistant to easy definition, and AI might be more challenging still (Kaplan and Haenlein, 2020). Consequently, when humans envision AI, human intelligence offers a ready touchstone (cf. Cave, Craig, Dihal, Dillon, Montgomery, Singler and Taylor, 2018:8; Kaplan and Haenlein, 2019:17). In part, the propensity to anthropomorphise AI in cognitive and/or social terms derives from speculative fiction (Salles et al., 2020:91). However, many AI researchers also employ anthropomorphic descriptions (Salles et al., 2020:91). Salles et al. (2020:91) suggest that the practice is driven by “a veritable inflation of anthropocentric mental terms that are applied even to non-living, artificial entities” or to “an intrinsic epistemic limitation/bias” on the part of AI scholars. They also speculate that anthropomorphism could stem from the human need to both understand and control AI in order to experience competence (Salles et al., 2020:91). We propose that journalists too are motivated to anthropomorphise AI to understand and control it, particularly because it is an emerging and therefore uncertain science: “people are more likely to anthropomorphize when they want to […] understand their somewhat unpredictable environment” (Salles et al., 2020:89-90).

When news media anthropomorphise AI, one epistemological consequence is the risk of exposing the public to exaggerated or erroneous claims (cf. Proudfoot, 2011; Samuel, 2019; Watson, 2019). To appraise this risk, we used framing theory to conduct a content analysis of anthropomorphism in articles published in four South African newspapers, namely, the Citizen, the Daily Maverick, the Mail & Guardian Online, and the Sowetan LIVE. We addressed the following research questions:

Research question 1: What were the most salient topics in the coverage of AI?

Research question 2: How was AI anthropomorphised?

Our analysis does not intend to, in Salles et al.’s (2020:93) words, defend “moral human exceptionalism”. Instead, we are interested, from an ontological vantage, in AI-human differences. Moreover, we wish to interrogate the potential epistemological and ethical impacts of anthropomorphic framing of AI for public consumption. Therefore, we question how news media constitute and interrelate ‘humans’ and ‘machines’. This undertaking is important. How we anthropomorphise AI compels us to re-evaluate how we conceptualise human and artificial intelligence (cf. Curran, Sun and Hong, 2019). For the second research question, we focused on the nature of anthropomorphic framing in South African news articles rather than on why coverage might differ across the outlets. Providing a methodical and nuanced account of inter-outlet variability exceeds the purview of this study; in future work, however, we plan to examine this variability by attending, among other things, to the agenda of each outlet, its target audience, and gatekeeping by editorial boards.

2. Framing theory and anthropomorphising AI

Media framings of AI shape its public reception and perception. Unlike technology experts who are au fait with the architecture of AI, the public rely on mediatised knowledge to learn about this technology (Vergeer, 2020:375). Consequently, the agendas of media outlets and the writers they employ inflect what is learned. Framing is also influenced by variables including the pressure to increase ratings and readership (Obozintsev, 2018:12) and by what Holguín (2018:5) terms the “myth-making of journalists” and the “public relations strategies of scientists”. Given this gamut of variables, how and the extent to which AI is reported varies within and across media outlets. Nevertheless, the findings of some studies concur. For example, in a study of how AI is represented in mainstream news articles in North America, the researchers found that the media generally depict AI technologies as being beneficial to society and as offering solutions to problems related to areas including health and the economy (Sun, Zhai, Shen and Chen, 2020:1). United Kingdom-based scholars echo this picture of AI as a problem-solving technology (Brennen, Howard and Nielsen, 2018), as do Garvey and Maskal (2020), who completed a sentiment analysis of the news media on AI in the context of digital health. A study of the Dutch press by Vergeer (2020) reports a balance of positive and negative sentiments. Fast and Horvitz (2017) examined The New York Times’ coverage of AI over a 30-year period and found that, despite a broadly positive outlook, framings have grown increasingly pessimistic over the last decade, with loss of control over AI cited as a particular concern. Within the literature, few studies of anthropomorphic framing of AI by journalists have been conducted, although several studies of the framing of AI in general touch upon anthropomorphism (Garvey and Maskal, 2020; Ouchchy, Coin and Dubljević, 2020; Vergeer, 2020; Bunz and Braghieri, 2021). A few studies indicate that anthropomorphism of AI is common in news articles that focus specifically on human-robot interaction (Bartneck, 2013; Złotowski, Proudfoot, Yogeeswaran and Bartneck, 2015).

Framing theory has proven fruitful for ascertaining what journalists and other writers for online news elect to foreground and background when writing on AI (Brennen, Howard and Nielsen, 2018; Obozintsev, 2018; Chuan, Tsai and Cho, 2019; Vergeer, 2020). It entails “the process of culling a few elements of perceived reality and assembling a narrative that highlights connections among them to promote a particular interpretation” (Entman, 2010:36). Frames selectively define a specific problem in terms of its costs and benefits, allege causes of the problem, make moral judgements about the agents or forces involved, and offer solutions (Entman, 2010:336).

Our deductive analysis of framing combines Nisbet’s (2009) typology with that proposed by Jones (2015) (Table 1). Using existing frames circumvents what Hertog and McLeod (2001:150) decry as “one of the most frustrating tendencies in the study of frames and framing, the tendency for scholars to generate a unique set of frames for every study”. However, Nisbet’s (2009) coding scheme addresses how science in general is framed in public discourse, rather than spotlighting AI. Therefore, we amalgamate it with Jones’s (2015) exhaustive analysis of news articles about AI. From Nisbet’s (2009) typology of frames, we omitted the ‘scientific and technical uncertainty’ frame. Instead, we retained Nisbet’s ‘social progress’ frame and employed Jones’s (2015) ‘competition’ frame. We propose that these competing frames may be evoked simultaneously by a journalist to reflect uncertainty about the various facets of AI: “[t]he alternation between different perspectives, with an apparently contradictory identification in the journalist’s report, contributes above all to construct an image of an emergent scientific field” (Hornmoen, 2009:16; cf. Kampourakis and McCain, 2020:152).

All the frames in Table 1 can be expressed through anthropomorphic tropes. For example, in ‘“Call me baby”: Talking sex dolls fill a void in China’ (the Sowetan LIVE, 4 February 2018), the journalist employs anthropomorphic tropes to evoke the frame of nature, referring to one doll by name (“Xiaodie”) (cf. Keay and Graduand, 2011) and describing others as “shapely”, “hot”, and “beautiful”. Similarly, in ‘Prepare for the time of the robots’ (Mail & Guardian Online, 16 February 2018), the journalist employs the frame of artifice when he anthropomorphises AI as having the potential to “outperform [humans] in nearly every job function” in the future.

Table 1: A typology of frames employed to study AI in the media

Nisbet’s (2009) coding scheme

Accountability: Science is framed as needing to be controlled and regulated in order to counter the risks it might pose to society and to the environment (e.g., “The human element in AI decision-making needs to be made visible, and the decision-makers need to be held to account”: the Daily Maverick, 18 July 2019).
Morality/Ethics: Science is framed as reflecting moral and ethical risks (e.g., “Artificial intelligence (AI) is meant to be better and smarter than humans but it too can succumb to bias and a lack of ethics”: Weekend Argus, 8 September 2019).
Middle way: A compromise position between polarised views on a scientific issue is generated (e.g., “[…] the combined forces between human and machine would be better than either alone”: News24, 22 January 2020).
Pandora’s Box: Science is depicted as having the potential to spiral out of control (e.g., “[…] robots […] would take targeting decisions themselves, which could ‘open an even larger Pandora’s box’, he warned”: News24, 23 May 2013).
Social progress: Science is framed as enhancing the quality of life of people in areas such as health, education, or finance and as protecting/improving the environment (e.g., “[…] AI has made the detection of the coronavirus easier”: the Daily Maverick, 7 December 2020).

Jones’s (2015) coding scheme

Artifice: AI is framed as an arcane technology in the sense that it could surpass human intelligence (e.g., “[…] AI may soon surpass [human intelligence] due to superior memory, multi-tasking ability, and its almost unlimited knowledge base”: IOL, 18 December 2020).
Competition: AI is framed in terms of depleting human and/or material resources (e.g., “[…] advancements in the tech world mean [AI technologies] are coming closer to replacing humans”: the Sowetan LIVE, 31 July 2018).
Nature: AI is framed in terms of the human-machine relationship and often entails romanticising AI or describing/questioning its nature/features (e.g., a robotic model called ‘Noonoouri’ “describes herself as cute, curious and a lover of couture”: the Sowetan LIVE, 20 September 2018).

3. Methods

3.1 Sample

Jones’s (2015:20) approach to data gathering informs our qualitative content analysis, because we “[mimicked] the results that a person (or machine) would have been presented with had they searched for the complete term ‘Artificial Intelligence’ in [popular news articles]”. We also adhered to Krippendorff’s (2013) guidelines for stratified sampling: first, we focused on collecting news articles from South African media outlets that have an online presence and that exhibit high circulations online. To determine which media outlets reach a wide readership, we used Feedspot, a content reader that allowed us to read top news websites in one place while keeping track of which articles we had read. We identified the Citizen, the Daily Maverick, the Mail & Guardian, and the Sowetan LIVE as newspapers with a high readership online. Second, concentrating on the period between January 2018 and February 2021, we collected articles from the four news outlets by searching for the term ‘artificial intelligence’. The third step involved limiting the sample to articles with a sustained focus on AI (Jones, 2015:25; Burscher, Vliegenthart and de Vreese, 2016). Ultimately, we conducted exhaustive analyses of 126 articles and discarded 260: 52 articles were collected from the Citizen, 36 from the Daily Maverick, 26 from the Mail & Guardian Online, and 12 from the Sowetan LIVE. This uneven sample matches similar studies of AI in the media (Ouchchy et al., 2020; Sun et al., 2020; Vergeer, 2020), and is rationalised by our curation process: articles with only passing allusions to AI were discarded along with advertorials, sponsored content or articles that were not text-based. Our unit of analysis was each complete article (cf. Chuan et al., 2019).
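
To make the tallies above easier to follow, the sketch below shows, in schematic form, how per-outlet counts and the retained/discarded split can be tracked once each article has been screened. It is purely illustrative: the screening for a sustained focus on AI was a manual, qualitative judgement, and the field names in the code are hypothetical.

```python
# Illustrative bookkeeping only (not the study's actual tooling): the screening for a
# sustained focus on AI was done by hand. Given screened records, this shows how the
# per-outlet tallies and the retained/discarded split reported above can be tracked.
from collections import Counter

def tally(articles):
    """articles: iterable of dicts such as {"outlet": "...", "retained": True}."""
    retained_per_outlet = Counter(a["outlet"] for a in articles if a["retained"])
    discarded = sum(1 for a in articles if not a["retained"])
    return retained_per_outlet, discarded

# Tiny worked example of the bookkeeping.
example = [{"outlet": "Citizen", "retained": True},
           {"outlet": "Citizen", "retained": False}]
print(tally(example))  # (Counter({'Citizen': 1}), 1)

# The figures reported in Section 3.1: 52 + 36 + 26 + 12 retained articles, 260 discarded.
reported = {"Citizen": 52, "Daily Maverick": 36,
            "Mail & Guardian Online": 26, "Sowetan LIVE": 12}
assert sum(reported.values()) == 126
```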

3.2 Analytic framework

As already noted, framing theory and the existing frames adumbrated in the previous section guided our inquiry. The strength of this directed approach to content analysis is that it corroborates and extends well-established theory and avoids cluttering the field with yet another idiosyncratic set of frames. Such an approach, “makes explicit the reality that researchers are unlikely to be working from the naïve perspective that is often viewed as the hallmark of naturalistic designs” (Hsieh and Shannon, 2005:1283). Of course, this approach is imperfect. Particularly, “researchers approach the data with an informed but, nonetheless, strong bias” (Hsieh and Shannon, 2005:1283). In response, we maintained an audit trail (White, Oelke and Friesen, 2012:244) and employed “thick description” (Geertz 1973) to bolster the transparency of the analysis and interpretation (Stahl and King, 2020:26).

3.3 Dominant topics and valence

We determined that unpacking how AI is anthropomorphised demands more than discerning the various frames through which this technology was represented in the data (Research question 2). It also proved necessary to examine how framing entwines with the topics that dominated our data. After all, repeated media exposure, “causes the public to deem a topic important and allows it to transfer from the media agenda to the public agenda” (Fortunato and Martin, 2016:134; cf. Freyenberger, 2013:16; McCombs, 1997:433).

After establishing how topics related to frames centred on anthropomorphism, the final step in our data analysis involved coding the overall valence of each article as positive, negative or mixed. Given the unreliability of automated content analysis for determining tone (Boukes, Van de Velde, Araujo and Vliegenthart, 2020), we manually coded each article by examining the presence of multiple keywords that reflected, amongst other things, uncertainty versus certainty and optimism versus pessimism about AI (cf. Kleinnijenhuis, Schultz and Oegema, 2015).
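
By way of illustration only, the sketch below expresses a keyword checklist of the kind described above as a simple flagging aid. The keyword lists and the decision rule are hypothetical examples, not the study’s actual coding scheme; in the study itself, valence was assigned manually.

```python
# Hypothetical keyword checklist for provisional valence flagging; a human coder makes
# the final call, as in the manual procedure described above.
OPTIMISM = {"breakthrough", "benefit", "improve", "opportunity", "transform"}
PESSIMISM = {"threat", "fear", "risk", "job losses", "out of control", "bias"}

def flag_valence(article_text: str) -> str:
    """Return a provisional label: 'positive', 'negative', 'mixed' or 'unclear'."""
    text = article_text.lower()
    positive_hits = sum(keyword in text for keyword in OPTIMISM)
    negative_hits = sum(keyword in text for keyword in PESSIMISM)
    if positive_hits and negative_hits:
        return "mixed"
    if positive_hits:
        return "positive"
    if negative_hits:
        return "negative"
    return "unclear"  # left entirely to the coder's judgement

print(flag_valence("AI offers a breakthrough in screening, but experts fear job losses."))  # mixed
```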

4. Findings

4.1 Salient topics and valence

Two topics prevailed across the four media outlets: ‘Business, finance, and the economy’ and ‘Human-AI interaction’. Each appeared in 18.25% of all articles. The second most salient topic was ‘Preparedness for an AI-driven world’, which featured in 13.49%. ‘Healthcare and medicine’ was next and received coverage in 11.90% of all articles. ‘Big Brother’ and ‘Control over AI’ were the fourth most prevalent topics, with each featuring in 10.31% of all articles. Less salient topics were the ‘News industry’ (3.17%) followed by the ‘Environment’, ‘Killer robots’, ‘Strong AI’, and the ‘Uncanny Valley’ (2.38%). ‘Singularity’ featured in 1.58% of all articles, while ‘Education’ was covered in 0.79% of all articles. All news outlets reported on ‘Business, finance, and the economy’, ‘Human-AI interaction’, and ‘Healthcare and medicine’. The only newspaper that omitted ‘Preparedness for an AI-driven world’ was the Sowetan LIVE. ‘Big Brother’ featured only in the Citizen, while ‘Control over AI’ was addressed in all newspapers barring the Citizen. The ‘Environment’ and the ‘News Industry’ were covered only in the Citizen and the Daily Maverick. ‘Killer robots’ appeared only in the Daily Maverick and the Mail & Guardian Online, while ‘Strong AI’ featured in the Citizen and the Mail and Guardian Online. The ‘Uncanny Valley’ was absent from the Mail & Guardian Online. Only the Daily Maverick reported on ‘Singularity’, while ‘Education’ appeared only in the Citizen.
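
As a worked check of the figures above, the sketch below recomputes topic salience as a simple proportion of the 126 analysed articles. The article counts are our reconstruction, inferred from the reported percentages rather than taken directly from the study, and one or two reported figures (e.g., 10.31% for 13/126) appear to truncate rather than round the final decimal.

```python
# Reconstructed counts (inferred from the reported percentages, assuming a base of 126 articles).
TOTAL = 126
counts = {
    "Business, finance, and the economy": 23,   # reported as 18.25%
    "Human-AI interaction": 23,                 # reported as 18.25%
    "Preparedness for an AI-driven world": 17,  # reported as 13.49%
    "Healthcare and medicine": 15,              # reported as 11.90%
    "Big Brother": 13,                          # reported as 10.31%
    "Control over AI": 13,                      # reported as 10.31%
    "News industry": 4,                         # reported as 3.17%
    "Environment": 3,                           # reported as 2.38%
    "Killer robots": 3,
    "Strong AI": 3,
    "Uncanny Valley": 3,
    "Singularity": 2,                           # reported as 1.58%
    "Education": 1,                             # reported as 0.79%
}

for topic, n in counts.items():
    print(f"{topic}: {n}/{TOTAL} = {100 * n / TOTAL:.2f}%")
```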

Positive valence characterised the topics ‘Business, finance, and the economy’, ‘Education’, the ‘Environment’, ‘Healthcare and medicine’, ‘Human-AI interaction’, and ‘Strong AI’. Negative valence marked ‘Big Brother’ and ‘Killer robots’. ‘Control over AI’, the ‘News Industry’, ‘Preparedness for an AI-driven world’, ‘Singularity’, and the ‘Uncanny Valley’ were coded with mixed valence.

Although this does not form part of the discussion in Section 5, we noted that nineteen articles did not reflect the use of anthropomorphic tropes; the topics across these articles included ‘Preparedness for an AI-driven world’ (six articles), ‘Big Brother’ (five articles), ‘Control over AI’ (three articles), ‘Business, finance, and the economy’ (two articles), the ‘News Industry’ (two articles), and ‘Strong AI’ (one article). ‘Preparedness for an AI-driven world’ and ‘Business, finance, and the economy’ were mostly coded with positive valence, while ‘Big Brother’ was coded with negative valence. The two articles on the ‘News Industry’ reflected negative and mixed valence respectively, and ‘Control over AI’ was mostly coded with mixed valence.

4.2 Anthropomorphising AI

Dataset 1: Cognitive anthropomorphism. The first dataset comprised the 16 articles (approximately 12% of the 126 analysed) that reflected only cognitive anthropomorphism. A closer reading also indicates that in articles featuring this type of anthropomorphism, the most salient topics were ‘Healthcare and medicine’, which featured in six articles, followed by ‘Business, finance, and the economy’, which was the focus of three articles. ‘Strong AI’ was covered in two articles, as was ‘Preparedness for an AI-driven world’. ‘Big Brother’, the ‘News Industry’, and ‘Singularity’ featured in one article each. When we examined how the two most salient topics were overwhelmingly framed, we noted that four articles focusing on ‘Healthcare and medicine’ were framed in terms of nature, one in terms of social progress, and one in terms of accountability. For the three articles addressing ‘Business, finance, and the economy’, the social progress frame predominated.

Dataset 2: Social anthropomorphism. Almost 47% of all articles (i.e., 59 of 126) reflected only social anthropomorphism. The most prevalent topics were ‘Human-AI interaction’ (21 articles), followed by ‘Business, finance, and the economy’ and ‘Control over AI’ (with eight articles each). Less salient topics were ‘Preparedness for an AI-driven world’ (six articles), ‘Big Brother’ (five articles), ‘Healthcare and medicine’ (four articles), the ‘Environment’ (three articles), the ‘Uncanny Valley’ (two articles), ‘Education’ (one article), and ‘Singularity’ (one article). With respect to framing in the most salient articles, 14 articles on ‘Human-AI interaction’ evoked the frame of nature, six evoked the frame of social progress, and one reflected the frame of accountability. Three articles on ‘Control over AI’ reflected the morality/ethics frame and three evoked the frame of accountability. One article on this topic reflected the frame of nature and one evoked the frame of competition. Five articles on ‘Business, finance, and the economy’ evoked the frame of social progress, while the remaining three reflected the frames of competition, accountability, and nature respectively.

Dataset 3: Cognitive and social anthropomorphism. Thirty-two articles (25.39%) reflected both types of anthropomorphism. The most salient topics were ‘Business, finance, and the economy’ (nine articles) and ‘Healthcare and medicine’ (five articles). Next, ‘Control over AI’ and ‘Human-AI interaction’ were the most prevalent topics, with four articles each. Less prevalent topics were ‘Preparedness for an AI-driven world’ (three articles), ‘Killer robots’ (three articles), ‘Big Brother’ (two articles), the ‘News Industry’ (one article) and the ‘Uncanny Valley’ (one article). With respect to salient topics and frames, five articles on ‘Business, finance, and the economy’ evoked the frame of social progress. Three evoked the frame of nature, and one the frame of accountability. Four articles on ‘Healthcare and medicine’ reflected the frame of social progress and one evoked the frame of nature. With respect to ‘Control over AI’, the frame of nature was dominant in two articles, while the accountability frame was prevalent in the other two. Finally, two articles on ‘Human-AI interaction’ reflected social progress as the dominant frame and two evoked the frame of nature.

5. Discussion

A detailed discussion of all three datasets exceeds the scope of this study. Instead, we foreground articles from the first two datasets, based on the most salient topics.  Space constraints aside, we noted that news articles that reflected a dominant type of anthropomorphism coincided with a sustained focus on specific types of AI, which in turn impacted the topic under discussion. Thus, articles that accented cognitive anthropomorphism also topicalised AI technologies that simulate human cognition, including machine learning and neural networks. This finding rationalises our decision to discuss cognitive anthropomorphism of these technologies in relation to ‘Healthcare and medicine’ and ‘Business, finance, and the economy’, not only because these were the most salient topics in the first dataset, but also because both sectors demand types of AI that augment human thinking. The second dataset, where social anthropomorphism prevailed, essentially focused on AI-driven digital assistants/social robots and on human engagement with these technologies. We, therefore, explicate the topics ‘Human-AI interaction’, ‘Business, finance, and the economy’, and ‘Control over AI’, which were the most prevalent topics in this dataset. We did, however, review the third dataset, and noted that the findings mirrored those identified in the first two datasets.

5.1 Articles in which cognitive anthropomorphism predominated

All 16 articles in which cognitive anthropomorphism predominated also struck sensational and/or alarmist tones, where sensational reporting was “entertainment-oriented” or “tabloid-like” (Uribe and Gunter, 2007:207; cf. Vettehen and Kleemans, 2018:114) and alarmist reporting framed AI as warranting fear (cf. Ramakrishna, Verma, Goyal and Agrawal, 2020:1558). Typically, these articles portrayed technology as equalling or rivalling human intelligence. To illustrate, “A team […] taught an artificial intelligence system to distinguish dangerous skin lesions from benign ones” (the Daily Maverick, 29 May 2018), and “A computer programme […] learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert” (the Citizen, 9 May 2018). Interestingly, the writers of these articles also deployed various discursive strategies to mitigate an alarmist and/or sensational tone. Rather than relying solely on their own reporting, journalists commonly moderated exaggerated claims about machine intelligence by quoting or paraphrasing sceptical scholars/experts and other AI stakeholders, or by simply enclosing key terms in scare quotes (cf. Schmid-Petri and Arlt, 2016:269). Journalists were thus able to maintain authorial distance from potentially false or overstated claims (cf. Johannson, 2019:141). Indeed, in 12 of the 16 articles, we noted that journalists built their articles predominantly on quotations and/or paraphrases of various actors’ voices. In Johannson’s (2019:138) view, constructing news reports around quotations enables, “journalistic positioning […] based on the detachment of responsibility”. In ‘Will your financial advisor be replaced by a machine?’ (the Citizen, 10 March 2018), for instance, although the journalist reported that “Technology has the ability […] to […] analyse a full array of products, potentially identifying suitable [financial] solutions” and that it “can process and analyse all kinds of data far quicker and more accurately than humans”, he also cited an industry expert as predicting that “the human element” will remain. Doing so enabled him to maintain distance from claiming that artificial intelligence can outpace humans.

Another strategy that several of the journalists in our dataset adopted to attenuate alarmist/sensational claims about the cognitive abilities of AI was to frame this technology in contradictory terms. This was particularly evident in articles centred on AI in the healthcare industry. In ‘Could AI beat humans in spotting tumours?’ (the Citizen, 22 January 2020), for example, a statement such as “Machines can be trained to outperform humans when it comes to catching breast tumours on mammograms” was followed by a reference to a study that highlighted AI’s flaws and misdiagnoses. Similarly, in ‘AI better at finding skin cancer than doctors’ (the Daily Maverick, 29 May 2018), the journalist reported that according to researchers, “A computer was better than human dermatologists at detecting skin cancer”; yet the journalist also quoted a medical expert as stating that “there is no substitute for a thorough clinical examination”. Citing contradictions around AI represents one option in the range of strategies journalists can leverage to resolve the uncertainty and conflict surrounding this novel technology (cf. Hornmoen, 2009:1; Kampourakis and McCain, 2020:152). They may also disregard any uncertainties and simply treat scientific claims as factual (Peters and Dunwoody, 2016:896). However, this strategy surfaced in only two of the 16 articles in which cognitive anthropomorphism was apparent. For example, in ‘Wits develops artificial intelligence project with a Canadian university to tackle Covid-19 in Africa’ (the Daily Maverick, 6 December 2020), the journalist quoted an academic as claiming that, in the fight against COVID-19, “Artificial intelligence is the most advanced set of tools to learn from the data and transfer that knowledge for the purpose of creating realistic modelling”.

Depicting AI in terms of competing interpretations that allow journalists to manage scientific uncertainty is typical of post-normal journalism, which blurs the boundaries between journalism and science (Brüggemann, 2017:57-58). Coined by Funtowicz and Ravetz (1993), post-normal science reflects high levels of uncertainty, given that the phenomena under investigation are characterised as “novel”, “complex” and “not well understood” (Funtowicz and Ravetz, 1993:87). AI is, undoubtedly, a contested technology. Some praise its power to evenly distribute social and economic benefits, while others decry its ontological threat to humanity (Ulnicane, Knight, Leach, Stahl and Wanjiku, 2020:8-9). Knowledge about AI remains limited and disputed. Unsurprisingly, then, journalists may generate “a plurality of perspectives” (Brüggemann, 2017:58).

By citing competing frames, journalists can “balance […] conflicting views” (Skovsgaard, Albæk, Bro and De Vreese, 2013:25) on a given topic and encourage readers to formulate judgements independently. Thus, in ‘Big data a game-changer for universities’ (the Mail & Guardian Online, 25 July 2019), readers must decide for themselves whether they support the view that AI is “capable of predicting lung cancer with greater accuracy than highly trained and experienced radiologists” or whether they believe that “humans are indispensable” in the detection of lung cancer. This particular example indicates that employing competing frames is not without flaws. In this respect, Boykoff and Boykoff (2004:127) contend that the balance norm could constitute a false balance in that journalists may “present competing points of view on a scientific question as though they had equal scientific weight when actually they do not”.[1] This false balance may confuse readers and hinder their ability to distinguish fact from fiction (Brüggemann, 2017:57-58). Research suggests that the public resist competing frames because they neutralise each other, complicating the process of taking a position on a particular issue (Sniderman and Theriault, 2004:139; cf. Chong and Druckman, 2012:2; Obozintsev, 2018:15). Consider the mixed messages in ‘X-rays and AI could transform TB detection in South Africa, but red tape might delay things’ (the Daily Maverick, 13 December 2020). A layperson would be hard-pressed to reconcile the claim made by the World Health Organisation that “the diagnostic accuracy and the overall performance of [AI-driven] software were similar to the interpretation of digital chest radiography by a human reader” with the view expressed by an expert from the Radiological Society of South Africa that this software requires human oversight. Significantly, what was omitted from this article, and from most articles centred on healthcare, was an account of why AI for disease detection requires human input. Instead, journalists merely reported that AI-driven diagnostic tools could err and require enormous datasets to enhance accuracy.

Indeed, five of the six healthcare articles in our dataset described AI as outperforming humans in the detection, interpretation or prediction of diseases. What is absent from these articles is the fact that human and artificial intelligence cannot be conflated: “Seeking to compare the reasoning of human and artificial intelligence (AI) in the context of medical diagnoses is an overly optimistic anthropomorphism” argues David Burns (2020:E290) in the Canadian Medical Association Journal. This position is premised on the observation that machine learning algorithms, which are employed in computer-based applications to support the detection of diseases, are mathematical formulae that are unable to reason, and so they are not intelligent. Quer, Muse, Nikzad, Topol and Steinhubl (2017:221) echo this argument, asserting that there is no explanatory power in medical AI: “It cannot search for causes of what is observed. It recognizes and accurately classifies a skin lesion, but it falls short in explaining the reasons causing that lesion and what can be done to prevent and eliminate disease”. Furthermore, although AI mimics human intelligence, it requires vast archives of data to ‘learn’. By contrast, humans can learn through simple observation. A good example is a scenario in which a human learns to recognise any given object after observing it only once or twice. AI software would need to view the object repeatedly to recognise it, and even then, it would be unable to distinguish this object from a new object (Pesapane et al., 2020:5).

The healthcare articles in our dataset that reflected cognitive anthropomorphism also failed to address ethical issues surrounding medical AI. A key ethical issue pertains to the consequences of using algorithms in healthcare. In ‘Could AI beat humans in spotting tumours?’ (the Citizen, 22 January 2020), the journalist reported on a deep learning AI model designed to detect breast tumours[2], quoting a medical doctor and researcher as stating that experts are unable to explain why the model ‘sees’ or ‘overlooks’ tumours: “At this point, we can observe the patterns […]. We don’t know the ‘why’”. This constitutes the so-called “black-box problem” (Castelvecchi, 2016:1), which arises when the processes between the input and output of data are opaque. Put differently, computers are programmed to function like neural networks (that are supposedly superior to standard algorithms), but as is the case with the human brain, “[i]nstead of storing what they have learned in a neat block of digital memory, they diffuse the information in a way that is exceedingly difficult to decipher” (Castelvecchi, 2016:1). Problematically, this entails that while doctors can interpret the outcomes of an algorithm, they cannot explain how the algorithm made the diagnosis (cf. Durán and Jongsma, 2021:1; cf. Gerke, Minssen and Cohen, 2020:296), which generates a host of ethical problems: “Can physicians be deemed responsible for medical diagnosis based on AI systems that they cannot fathom? How should physicians act on inscrutable diagnoses?” (Durán and Jongsma, 2021:1). On an epistemological level, we should be concerned, not only about biased algorithms but also about the degree to which black-box algorithms could damage doctors’ epistemic authority (Durán and Jongsma, 2021:1). Most of the articles in our dataset acknowledged that medical AI requires huge volumes of data to accurately screen for diseases, but overlooked such ethical and epistemic concerns. Additionally, the articles omitted any discussion of the potential for algorithmic biases related to race, gender, age, and disabilities, among others (Gerke et al., 2020:303-304). While several articles reported that AI can misdiagnose diseases, they overlooked arguments that inaccuracies may stem from the fact that algorithms are usually trained on Caucasian patients, instead of diverse patient data, thereby exacerbating health disparities (Adams, 2020:1). To educate the public about medical AI’s benefits and potential ethical and social risks, Ouchchy et al. (2020:927) suggest a multifaceted approach which, “could include increasing the accessibility of correct information to the public in the form of fact-sheets” and collaborating with AI ethicists to improve public debate.

Articles on ‘Business, finance and the economy’ that used cognitive anthropomorphism were mainly framed in terms of social progress. This finding is not unexpected, since applications of AI in business, finance, and the economy are generally associated with benefits including increased economic wealth, greater productivity and efficiency (cf. Vergeer, 2020:377). Nevertheless, journalists also struck an alarmist and/or sensational tone by claiming that AI can emulate human intelligence to make independent financial decisions (the Citizen, 23 May 2018; 23 October 2019). Alarmist and/or sensational coverage of AI may, “maximize ratings, readership, clicks, and views”; yet it may also retard public understanding of such technologies (Lea, 2020:329). As was the case in articles featuring healthcare and medicine, journalists focusing on personal finance framed AI in contradictory terms, reporting, for example, that while AI either matches or exceeds human intelligence, it will not replace human financial advisors in the near future: “The machine can emulate, but it can’t innovate – yet” (The Citizen, 23 October 2019). As already indicated, couching AI in contradictory terms may help journalists resolve uncertainty about this novel technology. On the other hand, such framings also risk befuddling the public, as noted earlier (cf. Brüggemann, 2017:57-58). Claims that AI will either mimic or rival human intelligence without replacing humans are unhelpfully vague and might obstruct public confidence in and perceptions of the technology (cf. Cave et al., 2018:2).

A 2018 article in The Guardian quotes Zachary Lipton, a machine learning expert based at Carnegie Mellon University, as lamenting that, “as […] hyped-up stories [about AI] proliferate, so too does frustration among researchers with how their work is being reported on by journalists and writers who have a shallow understanding of the technology” (Schwartz, 2018:4). Although several articles in our dataset made vague assurances that AI cannot yet substitute human intelligence, none engaged rigorously with arguments among AI scholars and industry experts that artificial general intelligence (AGI) might remain unrealised (Bishop, 2016; Fjelland, 2020; Lea, 2020:324). A 2021 book that offers interesting insights into AI’s so-called superintelligence is The myth of artificial intelligence by Erik Larson, who asserts that the scientific aspect of the AI myth is based on the assumption that we will achieve AGI as long as we make inroads in the area of weak or narrow AI. However, “[a]s we successfully apply simpler, narrow versions of intelligence that benefit from faster computers and lots of data, we are not making incremental progress” (Larson, 2021:2; cf. Lea, 2020:323). In fact, creating an algorithm for general intelligence “will require a major scientific breakthrough, and no one currently has the slightest idea what such a breakthrough would even look like” (Larson, 2021:2). In Watson’s (2019:417) view, drawing on anthropomorphic tropes to conflate human intelligence and AI is misleading and even dangerous. Supposing that algorithms share human traits implies that “we implicitly grant [AI] a degree of agency that not only overstates its true abilities but robs us of our own autonomy” (Watson, 2019:434).

5.2 Articles in which social anthropomorphism predominated

The 59 articles in which social anthropomorphism featured mirrored the above-mentioned discursive strategies. Generally, journalists blended their own reports with strategically selected quotes and paraphrases from AI researchers and technology experts. Scare quotes also registered a note of scepticism, and journalists continued to frame AI in contradictory terms. Anthropomorphic framing of social robots or digital assistants shaped articles on human-AI interaction as well as many articles on business/financial issues, pointing to the human tendency to regard AI-driven technologies as social actors (Duffy and Zawieska, 2012), despite an awareness that they are inanimate (Scholl and Tremoulet, 2000). This is predictable, given that creators of social robots and digital assistants rely on anthropomorphic design to enhance acceptance of and interaction with them (Fink, 2012:200; cf. Darling, 2015:3). Adopting a predominantly pro-AI stance, most journalists in our dataset portrayed social robots/digital assistants through the frames of nature and social progress, imbuing them with a human-like form and/or human-like behaviours. For example, with respect to having a human-like form, several social robots/digital assistants were described as exhibiting “human likeness” (Daily Maverick, 10 November 2019), “remarkable aesthetics” (the Citizen, 5 September 2018) and “complex facial expressions” (the Sowetan LIVE, 4 February 2018). With respect to human-like traits, journalists variously described AI as “capable of handling routine tasks” (the Daily Maverick, 9 May 2018), as “a colleague or personal assistant – in the human sense of the term” (the Mail & Guardian, 26 August 2019), and as offering emotional or mental support in the workplace (the Sowetan LIVE, 30 November 2020).  Quoting or paraphrasing industry experts, one writer of a Sowetan LIVE article (30 November 2020) actually claimed that AI surpasses humans’ capacity to function as assistants or companions: “[AI] doesn’t judge you and doesn’t care about your race or class or gender. It gives you non-biased responses”. The reality is that if AI systems are trained on a biased dataset, they will replicate bias (Borgesius, 2018:11).

With respect to bias, we noted that six Daily Maverick and Mail & Guardian Online articles in which social anthropomorphism was apparent briefly addressed AI bias and its ethical impact on society. This finding aligns with that of Ouchchy et al. (2019), who note that media coverage of the ethical issues surrounding AI is broadly realistic but superficial (cf. Ouchchy et al., 2020:1). Typical utterances in these articles humanised AI algorithms, as evidenced in: “AI algorithms […] will reflect and perpetuate the contexts and biases of those that create them” (the Mail & Guardian Online, 8 January 2018), “Fix AI’s racist, sexist bias” (the Mail & Guardian Online, 14 March 2019), and “[…] machines, just like humans, discriminate against ethnic minorities and poor people” (the Daily Maverick, 16 October). Epistemologically speaking, these utterances assign moral agency to AI. This misleading belief about AI’s capabilities detracts from debates around policies that need to address and prevent algorithmic bias on the part of humans (cf. Salles et al., 2020:93; Kaplan, 2015:36).

A few journalists focusing on human-AI interaction and on business/financial issues framed AI and human attributes as nearly indistinguishable. In such articles, journalists claimed that AI technologies possess “human-sounding voice[s] complete with ‘ums’ and ‘likes’” (the Daily Maverick, 9 May 2018) and that they “can be programmed to […] chat with customers and answer questions” (the Sowetan LIVE, 2 March 2018). Of course, AI-driven technologies are limited to predetermined responses (Heath, 2020:4), which Highfield (2018:3) terms “canned responses to fixed situations that give humans a sense that the [AI] is alive or capable of understanding [them]”. An examination of the dataset indicated that several journalists mitigated claims about AI’s ability to imitate human traits and behaviours through contradictory views that also evoked the frame of nature. Thus, in ‘Chip labour: Robots replace waiters in restaurant’ (the Mail & Guardian Online, 5 August 2018), although the journalist described a “little robotic waiter” as wheeling up to a table and serving patrons with food, he also employed the frame of nature to emphasise its “mechanical tones”. Similarly, in ‘Is your job safe from automation?’ (the Sowetan LIVE, 20 March 2018), the journalist referred to a humanoid robot as being able “to recognise a voice, principal human emotions, chat with customers and answer questions”, but also averred that “Robots have no sense of emotion or conscience”. Although the ‘Uncanny Valley’ is not discussed here because it was not a salient topic (featuring in only three articles), we noted that a few journalists referred to AI-driven robots as “uncanny” (the Daily Maverick, 10 November 2019) and “eerie” (the Sowetan LIVE, 8 March 2018). These references acknowledge the uncanny valley, “the point at which something nonhuman has begun to look so human that the subtle differences left appear disturbing” (Samuel, 2019:12), which has prompted robot designers to produce machines whose appearance is easily distinguishable from that of humans.

Only four journalists focusing on human-AI interaction or on business/financial issues expressed a negative or mixed stance on AI by questioning its ability to emulate human emotion and sentience. In a Sowetan LIVE article (17 May 2018), for example, the journalist questioned what the future would hold were robots to prepare our meals or care for our children. The journalist evoked the frame of nature to insist that “Robots can’t replace human love, laughter and touch”. In ‘Is your job safe from automation?’ (the Sowetan LIVE, 20 March 2018), the journalist observed that AI “lack[s] empathy” and has “no conscience”. These arguments align with expert conclusions that AI-driven robots cannot possess sentience/consciousness (cf. Hildt, 2019). Put differently, AI remains emotionally unaware and, as Kirk (2019:3) argues, even if we train AI to recognise emotions, humans programme the labelling and interpretation process.

Reasons for attributing a human form and/or human-like attributes to AI are speculative and vary across the literature. Still, adopting a psychological explanation, Epley, Waytz and Cacioppo (2007) propose that people tend to anthropomorphise a non-human agent when they possess insufficient knowledge about the agent’s mental model, when they need to understand and control it, or when they desire to form social bonds (cf. Złotowski et al., 2015:348). Scholars are divided over whether or not anthropomorphism in the context of social robots/digital assistants should concern us. Turkle (2007, 2010), for instance, argues that human-robot interaction undermines authentic human relationships and exploits human vulnerabilities, while Breazeal (2003) is of the view that social robots may be useful to humans as helpmates and social companions. As far as benefits are concerned, Darling (2015:9) observes that “[s]tate of the art technology is already creating compelling use cases in health and education, only possible as a result of engaging people through anthropomorphism”. We argue that while the benefits of social robots/digital assistants should not be dismissed, anthropomorphising AI does have several potentially negative consequences (Sparrow and Sparrow, 2006; Bryson, 2010; Hartzog, 2015). We have already touched on the idea that anthropomorphism may dupe people into believing that AI systems are human-like (cf. Kaplan, 2015:36). This concern is echoed by Engstrom (2018:19), who cautions that humanising AI may cause society to raise its expectations of this technology’s capabilities while ignoring its social, economic, and ethical consequences. In articles focused on human-AI interaction and business/financial issues, we noted that journalists either reported the risks of AI in a superficial manner or omitted them entirely. Thus, for example, in ‘AI tech to assist domestic abuse victims’ (the Citizen, 23 November 2018), an AI-driven programme accessed via Facebook’s Messenger was described as “a companion” that is “non-judgmental”, but the ethical risks around this AI-mental healthcare interface remained unaddressed. Using AI applications for mental healthcare raises several ethical concerns that have been widely discussed in the literature (Riek, 2016; Fiske, Henningsen and Buyx, 2019; Ferretti, Ronchi and Vayena, 2019). Some of these concerns revolve around possible abuse of the applications (in the sense that they could replace established healthcare professionals and thus widen healthcare inequalities), privacy issues, and the role and nature of non-human therapy in the context of vulnerable populations (Fiske et al., 2019). In ‘“Call me baby”: Talking sex dolls fill a void in China’ (the Sowetan LIVE, 4 February 2018), the journalist employed derogatory female framing, describing “sex dolls that can talk, play music and turn on dishwashers” for “lonely men and retirees”. While the journalist conceded that “On social media, some say the products reinforce sexist stereotypes”, this observation ended the interrogation of sexism. Across the four media outlets – and quoting AI developers’ own words – journalists described AI companions or assistants as female, “endowed with remarkable […] aesthetics” (the Citizen, 5 September 2018), as “lean” or “slender, with dark flawless skin”, etc. (the Sowetan LIVE, 28 September 2018). These descriptions echo mass media proclivities for framing human-AI relationships in terms of stereotypical gender roles instead of questioning such representations (cf. Döring and Poesch, 2019:665). The fact is that most AI-driven companions/assistants are designed according to stereotypical femininity (cf. Edwards, Edwards, Stoll, Lin and Massey, 2019). Informative journalism should challenge the entrenchment of these stereotypes that often “[come] with framing [AI] in human terms” (Darling, 2015:3). Another interesting example of how journalists may frame ethical concerns related to the application of AI is reflected in ‘Online chatbot suspended for hate speech, “despising” gays and lesbians’, published in the Citizen (1 January 2021). The article reports on ‘Lee Luda’, a chatbot who was recently ‘accused’ of hate speech after ‘attacking’ minorities online. Of significance is that although the journalist indicated that the chatbot “learned” from data taken from billions of conversations, this fact was backgrounded in favour of foregrounding the chatbot’s human-like behaviour; according to her designers, “Lee Luda is […] like a kid just learning to have a conversation. It has a long way to go before learning many things”. Emphasising the chatbot’s supposed ability to learn to avoid generating hate speech inadvertently frames this technology as having human intentions and moral agency, which are myths. AI does not possess intentionality (Abbass, 2018:165), which Searle defines as “that property of many mental states and events by which they are directed at or about or of objects and states of affairs in the world”. Without intentionality to act freely, AI does not have moral agency (Van de Poel, 2020:387).

Social anthropomorphism featured in eight articles on ‘Control over AI’ in the Daily Maverick, the Mail & Guardian Online, and the Sowetan LIVE. The anthropomorphic tropes, which were evoked mainly through the frames of accountability and morality/ethics, typically reflected a mixed valence in their propositions that humans must regulate AI. Concerns were related mainly to controlling or curtailing algorithmic/data bias (particularly as this related to racist and sexist bias), autonomous weapons, and job losses. With respect to bias, in a Daily Maverick article (3 October 2019), the journalist claimed in the lead that “AI can end up very biased”, but nevertheless repeatedly averred that AI is designed by humans and trained on datasets selected by humans. Similarly, in a Sowetan LIVE article (30 January 2021), the journalist described AI as “dangerous” and “prone to errors”, but mainly topicalised the development of AI software “by Africans, for Africans” that helps combat privacy violations and discrimination. Both journalists, therefore, checked the tendency to frame AI as a moral agent that exhibits autonomous decision-making processes, thus mitigating fears and unfounded expectations about this technology’s capabilities (cf. Salles et al., 2020:93). With respect to autonomous weapons, although the journalist in a Daily Maverick article (3 December 2019) referred to “killer robots”, she foregrounded the need for the international community to protect societies from “machines [that] can’t read between the lines or operate in the grey zone of uncertainty”. Regarding job lay-offs, a Daily Maverick journalist predicted in an article (26 November 2020) that AI will ultimately take human jobs, which “will usher in an era of techno-feudalism”. Yet he also mitigated this prediction by arguing that humans need to ensure that they regulate AI. It is not surprising that across the eight articles, the words “(human) control”/“controls” frequently appeared in relation to algorithmic/data bias, autonomous weapons, and job losses: studies by Ouchchy et al. (2020) and Sun et al. (2020) suggest that regulation of AI is a frequent topic in the media amidst fears of the ethical consequences of this technology.

5.3 A comparison of the news outlets with regard to topics and anthropomorphism of AI

Whether journalists published in a mainstream paper such as the Mail & Guardian Online, an alternative media outlet such as the Daily Maverick, or in tabloid-style newspapers such as the Citizen or the Sowetan LIVE, all of them employed similar strategies to reflect the uncertainty and conflict surrounding AI and its applications. Articles typically combined journalists’ own reports, scare quotes, direct and indirect speech of different actors, and contradictory framing of AI. All outlets, without exception, also anthropomorphised AI. The topics of ‘Human-AI interaction’, ‘Healthcare and medicine’, and ‘Business, finance, and the economy’ featured across all four outlets, with anthropomorphic framing of AI under the first two topics being uniform across the outlets. AI was overwhelmingly framed positively and depicted as exhibiting a human-like form/human traits or as mimicking human cognitive capabilities. Although articles published in the Sowetan LIVE also anthropomorphised AI when discussing business, finance, and the economy, these articles were coded with mixed valence, while articles published in the other newspapers were predominantly coded with positive valence. We eschew speculation as to why this was the case, given that between 2018 and the beginning of 2021, we identified only three articles in this newspaper that focused on AI and business/financial issues. Indeed, only 12 Sowetan LIVE articles satisfied our data collection criteria, suggesting that AI’s application in the business/financial world is an unpopular topic among readers. ‘Control over AI’ featured in the Daily Maverick, the Mail & Guardian Online as well as in the Sowetan LIVE, and again, the anthropomorphism of AI under this topic was uniform: AI was described as biased, as taking people’s jobs and as having the ability to kill humans. With a sensational soubriquet like ‘Killer robots’, one might assume that any reports on autonomous weapons would be the purview of tabloid-style newspapers, but this topic appeared only in three articles in the Daily Maverick and the Mail & Guardian Online. Despite some sensational/alarmist claims such as the Mail & Guardian Online’s (19 March 2018) – “[…] weapons may be able to learn on their own, adapt and fire” – journalists questioned the potential for AI to progress to a level where it will have moral agency and demanded that this type of AI be banned. Another topic that has the potential to be sensationalised is ‘Big Brother’, so it is unsurprising that it appeared 13 times in the Citizen. Five of the 13 articles did not anthropomorphise AI as a ‘spy’ but highlighted the human element that drives surveillance technology. As noted in Section 4.1, the only news outlet that did not cover ‘Preparedness for an AI-driven world’ was the Sowetan LIVE. Since this topic was generally framed around the need for South Africans to equip themselves with the skills necessary to cope with AI, which is ‘taking’ people’s jobs, we find the omission of this topic surprising, given that the newspaper’s readership constitutes mainly working-class South Africans. The remaining topics reflected in our datasets are not discussed, since they each constituted less than 4% of the entire dataset.

6. Conclusions

This study has revealed that anthropomorphism of AI was pervasive in the four South African online newspapers, with only 19 of the 126 articles reflecting no anthropomorphic tropes. Most of the remaining articles (59) reflected social anthropomorphism of AI, while a smaller number (16) evoked cognitive anthropomorphism. A total of 32 reflected both types of anthropomorphism. When cognitive anthropomorphism was evoked, journalists typically portrayed AI as matching or exceeding human intelligence, and when social anthropomorphism was elicited, AI technologies were typically framed as social actors. Whichever type of anthropomorphism was dominant, AI was overwhelmingly represented as benefitting humankind. Although journalists generally attempted to mitigate exaggerated claims about AI by using a variety of discursive strategies, the construction of anthropomorphic tropes to some extent overtook the reality of what AI technologies currently encompass, essentially obscuring these technologies’ epistemological and ethical challenges. It is critical that journalists interrogate how they contextualise and qualify AI, given that it is disrupting almost every aspect of our lives.

While the content analysis yielded insights into how AI is framed by the media in South Africa, a limitation of the study is that the sample is not necessarily representative of the anthropomorphic framing employed in other online news outlets, which may feature different or more polarised views of AI. Nevertheless, Obozintsev (2018:45) observes that “it seems unlikely that artificial intelligence would be framed in a markedly different manner” in other outlets, since “opinions about [AI] are not as politically divisive as scientific issues such as climate change and evolution”, for example.

References

Abbass, H.A. 2019. Social integration of artificial intelligence: Functions, automation allocation logic and human-autonomy trust. Cognitive Computation 11: 159–171.

Adams, K. 2020. 3 hospital execs: How to ensure medical AI is trained on sufficiently diverse patient data. Becker’s Health IT, 30 November. Available:  https://www.beckershospitalreview.com/artificial-intelligence/3-hospital-execs-how-to-ensure-medical-ai-is-trained-on-sufficiently-diverse-patient-data.html (Date of access: 9 April 2021).

Bartneck, C. 2013. Robots in the theatre and the media. Design and Semantics of Form and Movement: 64–70.

Birkenshaw, J. 2020. What is the value of firms in an AI world? Pp. 23-35 in J. Canals and F. Heukamp (Eds.), The future of management in an AI world. USA: Palgrave Macmillan/Springer International Publishing.

Bishop, J.M. 2016. Singularity, or how I learned to stop worrying and love artificial intelligence. Pp. 267-281 in V.C. Müller (Ed.), Risks of general intelligence. London, UK: CRC Press – Chapman & Hall.

Borgesius, F.Z. 2018. Discrimination, artificial intelligence, and algorithmic decision-making. Available: https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decisionmaking/1680925d73 (Date of access: 3 March 2021).

Boukes, M., Van de Velde, B., Araujo, T. and Vliegenthart, R. 2020. What’s the tone? Easy doesn’t do it: Analyzing performance and agreement between off-the-shelf sentiment analysis tools. Communication Methods and Measures 14(2): 83-104.

Boykoff, M. and Boykoff, J. 2004. Balance as bias: Global warming and the US prestige press. Global Environmental Change 14(2): 125-136.

Breazeal, C. 2003. Toward sociable robots. Robotics and Autonomous Systems 42(3): 167-75.

Brennen, J.S., Howard, P.N. and Nielsen, R.K. 2018. An industry-led debate: How UK media cover artificial intelligence. RISJ Fact-Sheet. Oxford, UK: University of Oxford.

Brüggemann, M. 2017. Post-normal journalism. Climate journalism and its changing contribution to an unsustainable debate. Pp. 57-73 in P. Berglez, U. Olausson and M. Ots (Eds.), What is sustainable journalism? Integrating the environmental, social, and economic challenges of journalism. New York, NY: Peter Lang.

Bryson, J. 2010. Robots should be slaves. Pp. 63-74 in Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues. Amsterdam: John Benjamin Publishing Company.

Bunz, M. and Braghieri, M. 2021. The AI doctor will see you now: Assessing the framing of AI in news coverage. AI & SOCIETY: 1-14.

Burns, D.M. 2020. Artificial intelligence isn’t. Canadian Medical Association Journal 192(11): E290-E290.

Burscher, B., Vliegenthart, R. and Vreese, C.H.D. 2016. Frames beyond words: Applying cluster and sentiment analysis to news coverage of the nuclear power issue. Social Science Computer Review 34(5): 530-545.

Castelvecchi, D. 2016. Can we open the black box of AI?. Nature News 538(7623): 20-23.

Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B. and Taylor, L. 2018. Portrayals and perceptions of AI and why they matter. Available: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf  (Date of access: 2 February 2020).

Chong, D. and Druckman, J.N. 2012. Counterframing effects. Journal of Politics 75(1): 1-16.

Chuan, C.H., Tsai, W.H.S. and Cho, S.Y. 2019. Framing artificial intelligence in American newspapers. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: 339-344.

Colom, R., Karama, S., Jung, R.E. and Haier, R.J. 2010. Human intelligence and brain networks. Dialogues in Clinical Neuroscience 12(4): 489-501.

Curran, N.M., Sun, J. and Hong, J.W. 2019. Anthropomorphizing AlphaGo: A content analysis of the framing of Google DeepMind’s AlphaGo in the Chinese and American press. AI & SOCIETY 35: 727-735.

Damiano, L. and Dumouchel, P. 2018. Anthropomorphism in human–robot co-evolution. Frontiers in Psychology 9: 1-9.

Darling, K. 2015. ‘Who’s Johnny? Anthropomorphic framing in human-robot interaction, integration, and policy. Pp. 3-21 in P. Lin, G. Bekey, K. Abney and R. Jenkins (Eds.), Robotic Ethics 2.0. Oxford: Oxford University Press.

Döring, N. and Poeschl, S. 2019. Love and sex with robots: A content analysis of media representations. International Journal of Social Robotics 11(4): 665-677.

Duffy, B.R. and Zawieska, K. 2012. Suspension of disbelief in social robotics. 21st IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): 484-89.

Durán, J.M. and Jongsma, K.R. 2021. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics 0: 1-7.

Edwards, C., Edwards, A., Stoll, B., Lin, X. and Massey, N. 2019. Evaluations of an artificial intelligence instructor’s voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior 90: 357-362.

Engstrom, E. 2018. Gendering of AI/robots: Implications for gender equality amongst youth generations. AFI Changemakers and UNCTAD Youth Summit Delegates report. Available: https://arielfoundation.org/wp-content/uploads/2019/01/AFIChangemakers-and-UNCTAD-Delegates-Report-on-Technology-2019.pdf#page=13 (Date of access: 8 January 2021).

Entman, R.M. 2010. Framing media power. Pp. 331-355 in P. D’Angelo and J. Kuypers (Eds.), Doing news framing analysis. New York, NY: Routledge.

Epley, N., Waytz, A. and Cacioppo, J.T. 2007. On seeing human: A three-factor theory of anthropomorphism. Psychological Review 114(4): 864-886.

Erickson, R.P. 2014. Are humans the most intelligent species? Journal of Intelligence 2(3): 119-121.

Fast, E. and Horvitz, E. 2017. Long-term trends in the public perception of artificial intelligence. In: Proceedings of the AAAI Conference on Artificial Intelligence 31(1): 963-969.

Ferretti, A., Ronchi, E. and Vayena, E. 2019. From principles to practice: Benchmarking government guidance on health apps. The Lancet Digital Health 1(2): e55-e57.

Fjelland, R. 2020. Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications 7(1): 1-9.

Fiske, A., Henningsen, P. and Buyx, A. 2019. Your robot therapist will see you now: Ethical implications of embodied artificial intelligence in psychiatry, psychology, and psychotherapy. Journal of Medical Internet Research 21(5): e13216.

Fortunato, J.A. and Martin, S.E. 2016. The intersection of agenda-setting, the media environment, and election campaign laws. Journal of Information Policy 6(1): 129-153.

Freyenberger, D. 2013. Amanda Knox: A content analysis of media framing in newspapers around the world. Available: http://dc.etsu.edu/cgi/viewcontent.cgi?article=2281&context=etd (Date of access: 22 February 2021).

Funtowicz, S.O. and Ravetz, J.R. 1993. Science for the post-normal age. Futures 25(7): 739-755.

Garvey, C. and Maskal, C. 2020. Sentiment analysis of the news media on artificial intelligence does not support claims of negative bias against artificial intelligence. Omics: A Journal of Integrative Biology 24(5): 286-299.

Geertz, C. 1973. The interpretation of cultures: Selected essays. New York, NY: Basic Books.

Gerke, S., Minssen, T. and Cohen, G. 2020. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare: 295-336.

Giger, J.C., Piçarra, N., Alves‐Oliveira, P., Oliveira, R. and Arriaga, P. 2019. Humanization of robots: Is it really such a good idea?. Human Behavior and Emerging Technologies 1(2): 111-123.

Heath, N. 2020. What is AI? Everything you need to know about artificial intelligence. ZDNet, 11 December. Available: https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/ (Date of access: 19 April 2021).

Hertog, J. and McLeod, D. 2001. A multi-perspectival approach to framing analysis: A field guide. Pp. 141-162 in S. Reese, O. Gandy and A. Grant (Eds.), Framing public life. Mahwah, NJ: Erlbaum.

Highfield, V. 2018. Can AI really be emotionally intelligent? Alphr, 27 June. Available: https://www.alphr.com/artificial-intelligence/1009663/can-ai-really-be-emotionally-intelligent/ (Date of access: 19 April 2021).

Hildt, E. 2019. Artificial intelligence: Does consciousness matter?. Frontiers in Psychology 10: 1-3.

Holguín, L.M. 2018. Communicating artificial intelligence through newspapers: Where is the real danger?. Available: https://mediatechnology.leiden.edu/images/uploads/docs/martin-holguin-thesis-communicating-ai-through-newspapers.pdf (Date of access: 3 April 2020).

Hornmoen, H. 2009. What researchers now can tell us: Representing scientific uncertainty in journalism. Observatorio 3(4): 1-20.

Hsieh, H.F. and Shannon, S.E. 2005. Three approaches to qualitative content analysis. Qualitative Health Research 15(9): 1277-1288.

Johannson, M. 2019. Digital and written quotations in a news text: The hybrid genre of political news opinion. Pp. 133-162 in P.B. Franch and P.G.C. Blitvich (Eds.), Analyzing digital discourse: New insights and future directions. Cham, Switzerland: Springer.

Jones, S. 2015. Reading risk in online news articles about artificial intelligence. Unpublished MA dissertation. Edmonton, Alberta: University of Alberta.

Kampourakis, K. and McCain, K. 2020. Uncertainty: How it makes science advance. USA: Sheridan Books, Incorporated.

Kaplan, A. and Haenlein, M. 2019. Siri, Siri, in my hand: Who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence. Business Horizons 62(1): 15-25.

Kaplan, A. and Haenlein, M. 2020. Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons 63(1): 37-50.

Kirk, J. 2019. The effect of Artificial Intelligence (AI) on emotional intelligence (EI). Capgemini, 19 November. Available: https://www.capgemini.com/gb-en/2019/11/the-effect-of-artificial-intelligence-ai-on-emotional-intelligence-ei/ (Date of access: 5 May 2021).

Kleinnijenhuis, J., Schultz, F. and Oegema, D. 2015. Frame complexity and the financial crisis: A comparison of the United States, the United Kingdom, and Germany in the period 2007–2012. Journal of Communication 65(1): 1-23.

Krippendorff, K. 2013. Content analysis: An introduction to its methodology. Los Angeles, CA: Sage.

Larson, E.J. 2021. The myth of artificial intelligence: Why computers can’t think the way we do. Cambridge, MA: Harvard University Press.

Lea, G.R. 2020. Constructivism and its risks in artificial intelligence. Prometheus 36(4): 322-346.

McCombs, M. 1997. Building consensus: The news media’s agenda-setting roles. Political Communication 14(4): 433-443.

Monett, D., Lewis, C.W. and Thórisson, K.R. 2020. Introduction to the JAGI special issue “On defining Artificial Intelligence” – Commentaries and author’s response. Journal of Artificial General Intelligence 11(2): 1-100.

Mueller, S.T. 2020. Cognitive anthropomorphism of AI: How humans and computers classify images. Ergonomics in Design 28(3): 12-19.

Nelson, T.E. and Kinder, D.R. 1996. Issue frames and group-centrism in American public opinion. The Journal of Politics 58(4): 1055-1078.

Nisbet, M.C. 2009. Framing science. A new paradigm in public engagement. Pp. 1-32 in L. Kahlor and P. Stout (Eds.), Understanding science: New agendas in science communication. New York, NY: Taylor and Francis.

Nisbet, M.C. 2016. The ethics of framing science. Pp. 51-74 in B. Nerlich, R. Elliott and B. Larson (Eds.), Communicating biological sciences. USA: Routledge.

Obozintsev, L. 2018. From Skynet to Siri: An exploration of the nature and effects of media coverage of artificial intelligence. Unpublished Doctoral thesis. Newark, Delaware: University of Delaware.

Ouchchy, L., Coin, A. and Dubljević, V. 2020. AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. AI & SOCIETY 35(4): 927-936.

Pesapane, F., Tantrige, P., Patella, F., Biondetti, P., Nicosia, L., Ianniello, A., Rossi, U.G., Carrafiello, G. and Ierardi, A.M. 2020. Myths and facts about artificial intelligence: Why machine and deep-learning will not replace interventional radiologists. Medical Oncology 37(5): 1-9.

Peters, H.P. and Dunwoody, S. 2016. Scientific uncertainty in media content: Introduction to this special issue. Public Understanding of Science 25(8): 893–908.

Peters, M.A. and Jandrić, P. 2019. Artificial intelligence, human evolution, and the speed of learning. Pp. 195-206 in J. Knox, Y. Wang and M. Gallagher (Eds.), Artificial intelligence and inclusive education. Perspectives on rethinking and reforming education. Singapore: Springer.

Proudfoot, D. 2011. Anthropomorphism and AI: Turingʼs much misunderstood imitation game. Artificial Intelligence 175(5-6): 950-957.

Quer, G., Muse, E.D., Nikzad, N., Topol, E.J. and Steinhubl, S.R. 2017. Augmenting diagnostic vision with AI. The Lancet 390(10091): 221.

Ramakrishna, K., Verma, I., Goyal, M.I. and Agrawal, M.M. 2020. Artificial intelligence: Future employment projections. Journal of Critical Reviews 7(5): 1556-1563.

Riek, L.D. 2016. Robotics technology in mental health care. Pp. 185-203 in D.D. Luxton (Ed.), Artificial intelligence in behavioral and mental health care. USA: Academic Press.

Salles, A., Evers, K. and Farisco, M. 2020. Anthropomorphism in AI. AJOB Neuroscience 11(2): 88-95.

Samuel, J.L. 2019. Company from the uncanny valley: A psychological perspective on social robots, anthropomorphism and the introduction of robots to society. Ethics in Progress 10(2): 8-26.

Schmid-Petri, H. and Arlt, D. 2016. Constructing an illusion of scientific uncertainty? Framing climate change in German and British print media. Communications 41(3): 265-289.

Scholl, B.J. and Tremoulet, P.D. 2000. Perceptual causality and animacy. Trends in Cognitive Sciences 4(8): 299-309.

Schwartz, O. 2018. “The discourse is unhinged”: How the media gets AI alarmingly wrong. The Guardian, 25 July. Available: https://www.theguardian.com/technology/2018/jul/25/ai-artificial-intelligence-social-media-bots-wrong (Date of access: 14 April 2021).

Skovsgaard, M., Albæk, E., Bro, P. and De Vreese, C. 2013. A reality check: How journalists’ role perceptions impact their implementation of the objectivity norm. Journalism 14(1): 22-42.

Sniderman, P.M. and Theriault, S.M. 2004. The structure of political argument and the logic of issue framing. Pp. 133-165 in W.E. Saris and P.M. Sniderman (Eds.), Studies in public opinion: Attitudes, nonattitudes, measurement error, and change. USA: Princeton University Press.

Sparrow, R., and L. Sparrow. 2006. In the hands of machines? The future of aged care. Minds and Machines 16(2): 141–161.

Stahl, N.A. and King, J.R. 2020. Expanding approaches for research: Understanding and using trustworthiness in qualitative research. Journal of Developmental Education 44(1): 26-29.

Sun, S., Zhai, Y., Shen, B. and Chen, Y. 2020. Newspaper coverage of artificial intelligence: A perspective of emerging technologies. Telematics and Informatics 53: 1-9.

Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433-460.

Turkle, S. 2007. Simulation vs. authenticity. Pp. 244-247 in J. Brockman (Ed.), What is your dangerous idea? : Today’s leading thinkers on the unthinkable. USA: Simon & Schuster.

Turkle, S. 2010. In good company? On the threshold of robotic companions. Pp. 3-10 in Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues. Amsterdam/Philadelphia: John Benjamins Publishing Company.

Uribe, R. and Gunter, B. 2007. Are sensational news stories more likely to trigger viewers’ emotions than non-sensational news stories? A content analysis of British TV news. European Journal of Communication 22(2): 207-228.

Ulnicane, I., Knight, W., Leach, T., Stahl, B.C. and Wanjiku, W.G. 2020. Framing governance for a contested emerging technology: Insights from AI policy. Policy and Society: 1-20.

Van de Poel, I. 2020. Embedding values in artificial intelligence (AI) systems. Minds and Machines 30(3): 385-409.

Vergeer, M. 2020. Artificial intelligence in the Dutch press: An analysis of topics and trends. Communication Studies 71(3): 373-392.

Vettehen, H.P. and Kleemans, M. 2018. Proving the obvious? What sensationalism contributes to the time spent on news video. Electronic News 12(2): 113-127.

Wang, P. 2019. On defining artificial intelligence. Journal of Artificial General Intelligence 10(2): 1-37.

Watson, D. 2019. The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines 29(3): 417-440.

White, D.E., Oelke, N.D. and Friesen, S. 2012. Management of a large qualitative data set: Establishing trustworthiness of the data. International Journal of Qualitative Methods 11(3): 244-258.

Złotowski, J., Proudfoot, D., Yogeeswaran, K. and Bartneck, C. 2015. Anthropomorphism: opportunities and challenges in human–robot interaction. International Journal of Social Robotics 7(3): 347-360.

Endnotes

[1] We are not suggesting that frames can be reduced to one of two arguments: “Frames are constructions of the issue: they spell out the essence of the problem, suggest how it should be thought about and may go so far as to recommend what (if anything) should be done […]” (Nelson and Kinder, 1996:1057).

[2] Deep learning is a subset of machine learning that learns through artificial neural networks. In simple terms, such a network loosely mimics the human brain and enables an AI model to ‘learn’ from huge amounts of data.
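As a loose illustration of this endnote only, the following Python sketch computes the output of a single layer of such a network; the inputs, weights and layer size are arbitrary placeholders rather than any system discussed in this article.

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)            # a 4-feature input example (placeholder values)
W = rng.random((3, 4))       # weights of one layer with 3 'neurons'
b = rng.random(3)            # biases

# ReLU activation: weighted sums are passed on, negative sums are set to zero.
# Real deep-learning systems stack many such layers and adjust the weights
# from large amounts of data.
hidden = np.maximum(0, W @ x + b)
print(hidden)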

Friend or foe? How online news outlets in South Africa frame artificial intelligence

Title: Friend or foe? How online news outlets in South Africa frame artificial intelligence

Author: Susan Brokensha, University of the Free State.

Ensovoort, volume 41 (2020), number 7: 2

Abstract

The influence that the media have in shaping public opinion about artificial intelligence (AI) cannot be overestimated, since the various frames they employ to depict this technology may be adopted into the public’s socio-cultural frameworks. Employing framing theory, we conducted a content analysis of online news articles published by four outlets in South Africa with a view to gaining insights into how AI is portrayed in them. We were particularly interested in determining whether AI was represented as friend or foe. Our analysis indicated that although most articles reflected a pro-AI stance, many also tended to be framed in terms of both anti- and pro-technology discourse, and that this dualistic discourse was to some degree resolved by adopting a middle way frame in which a compromise position between the polarised views was proposed. The analysis also signalled that several of the articles in our dataset called for the need for human agency to regulate and govern AI in (South) Africa. This is an important call as it is in keeping with the need to ensure that AI is applied in such a way that it benefits Africa and its culture and context.

1. Introduction

Key milestones in the evolution of artificial intelligence (AI) cannot be neatly mapped, given the need to take into account not only definitive discoveries and events in AI, but also hardware innovations, software platforms, and developments in robotics, all of which have had a significant impact on AI systems. Most AI scholars would agree that a defining moment in its history was the hosting by John McCarthy (Dartmouth College) and Marvin Minsky (Harvard University) of the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) at Dartmouth College in the United States in 1956 (Haenlein and Kaplan, 2019; Lele, 2019; Mondal, 2020). Working alongside Minsky, Nathaniel Rochester (IBM Corporation) and Claude Shannon (Bell Telephone Laboratories), McCarthy coined the term ‘artificial intelligence’ in a 1955 proposal for the DSRPAI. In this document, the four individuals proposed that the 1956 brainstorming session be based on “the conjecture that every aspect of learning or any feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (McCarthy, Minsky, Rochester and Shannon, 1955:2). Prior to the DSRPAI, crucial discoveries in AI included McCulloch and Pitts’ computational model of a neuron (1943) – a model which was then further developed by Frank Rosenblatt when he formulated the Perceptron learning algorithm in 1958 – and Alan Turing’s Turing Test (1950), designed to determine if a machine could ‘think’.

Since the 1950s, other significant events or innovations that have influenced AI systems are simply too numerous to summarise in one paper, but scholars such as Perez, Deligianni, Ravi and Yang (2018) helpfully describe the evolution of AI in terms of positive and negative seasons, the events and innovations just described constituting the birth of AI. The period between 1956 and 1974 is described by Perez et al. (2018:9) as AI’s first spring, which was marked by advances in designing computers that could solve mathematical problems and process strings of words (cf. De Spiegeleire, Maas and Sweijs, 2017:31). First winter refers to the period between 1974 and 1980 when the public and media alike began interrogating whether AI held any benefits for humankind amidst over-inflated claims at the time that AI would surpass human intelligence (cf. Curioni, 2018:11). Since emerging technologies (such as machine translation) did not live up to lofty expectations, AI researchers’ funding was heavily curtailed by major agencies such as DARPA (Defense Advanced Research Projects Agency). The so-called second spring is generally regarded as the period between 1980 and 1987, and was characterised by the revival of neural network models for speech recognition (cf. Shin, 2019:71). This brief cycle was replaced by AI’s second winter (1987-1993) during which desktop computers gained in popularity and threatened the survival of the specialised hardware industry (cf. Maruyama, 2020:383). The period between 1997 and 2000 is not described in terms of a particular season, but during this time, machine-learning methods such as Bayesian networks and evolutionary algorithms dominated the field of AI. The period from 2000 to the present is described by scholars as the third spring of AI, a season distinguished by big data tools (that include Hadoop, Apache Spark, and Cassandra), as well as by other emerging technologies such as cloud computing, robotics and the Internet of Things (cf. Maclure, 2019:1).

Although some authorities in the technology sector maintain that this spring will not endure owing to AI’s cyclic nature (Piekniewski, 2018; Schuchmann, 2019), others argue that it is here to stay (Bughin and Hazan, 2017; Lorentz, 2018; Sinur, 2019). Indeed, Andrew Ng, a leading expert in machine learning and author of AI Transformation Playbook (2018), is of the view that “[we] may be in the eternal spring of AI” – that “[the] earlier periods of hype emerged without much actual value created, but today, it’s creating a flood of value” (Ray, 2018:1). It appears that media outlets throughout the world, whether positively or negatively disposed towards AI, have jumped onto the AI bandwagon if newspaper headlines are anything to go by:

  • South Africa: ‘The robots are coming for your jobs’ (News24, 29 September 2016)
  • Nigeria: ‘Machine learning may erase jobs, says Yudala’ (Daily Times, 28 August 2017)
  • Brazil: ‘In Brazil, “AI Gloria” will help women victims of domestic violence’ (The Rio Times, 29 April 2019).

Meredith Broussard (2018) argues that some journalists and researchers have succumbed to technochauvinism, which is the utopian belief that technology will solve all our problems. At the other extreme are those who may have exaggerated the risks that accompany AI. In this regard, robotics expert Sabine Hauert decries, amongst other things, “hyped headlines that foster fear […] of robotics and artificial intelligence” (Hauert, 2015:416). Hauert (2015:417) laments the public being faced with “a mostly one-sided discussion that leaves them worried that robots will take their jobs, fearful that AI poses an existential threat”.

In this paper, we aim to explore how South African mainstream news articles that are published online frame AI, with a view to determining whether it is depicted as a friend or foe to humans. The main reason for undertaking such an exploration is that online media outlets play a critical role in not only disseminating information, but also helping the public gain insights into scientific and technological innovations (cf. Brossard, 2013:14096). In accordance with framing theory, “the way an issue is framed and discussed through specific perspectives can influence how audiences make sense of the issue” (Chuan, Tsai and Cho, 2019:340). A review of the literature indicates that apart from analysing how journalists have framed scientific and technological news about, for instance, biomedicine, chemistry, physics, and renewable energy (Gastrow, 2015; Kabu, 2017; Rochyadi-Reetz, Arlt, Wolling and Bräuer, 2019), scholars have not undertaken studies to determine how AI is covered by the South African online press.

2. Framing theory and AI coverage in the media

From the outset, we would like to point out that we are not claiming that there is a causal relationship between journalists’ framing of issues and society’s opinions about those issues. Instead, “media as a powerful cultural institution […] may influence [the] public’s attitudes towards an emerging technology, particularly in the early stage when most people feel uncertain, wary, or anxious about an unfamiliar yet powerful technology” (Chuan et al., 2019:340). A number of researchers have, in recent years, studied media coverage of AI (Holguín, 2018; Jones, 2018; Brennen, Howard and Nielsen, 2018; Obozintsev, 2018; Chuan et al., 2019; Cui and Wu, 2019), and framing theory in particular appears to be useful for understanding how the media depict utopian and/or dystopian views of this type of technology. Nisbet (2009a:51) points out that frames enable us to gain insights into “how various actors in society define science-related issues in politically strategic ways” as well as “why an issue might be important, who or what might be responsible, and what should be done”. How AI is framed “may differ substantially across outlets” (Obozintsev, 2018:65), given that the complex relationship between journalism and science is influenced by a number of variables such as “myth-making of journalists, constraints, biases, public relations strategies of scientists” (Holguín, 2018:5), and the like. Some scholars and AI experts argue that media coverage of AI leaves much to be desired – that it is “bogus” and “overblown” (Siegel, 2019:1) and that the media may “resort to headlines and images that are both familiar and sensationalist” (Cave, Craig, Dihal, Dillon, Montgomery, Singler and Taylor, 2018:17). Others have noted that AI “is typically discussed as an innovation that can impact humanity in a positive way, making the lives of individuals better or easier” (Obozintsev, 2018:65). We are of the view that it is critical to explore how AI is framed in South African news outlets to obtain a better understanding of the various perspectives that will inevitably inform public opinion of this technology. This might in turn “help bridge the diverse conversations occurring around AI and facilitate a richer public dialogue” (Chuan et al., 2019:340).

Adopting an approach employed by Chuan et al. (2019) and Strekalova (2015), we differentiated between topics and frames in our dataset, acknowledging that each news article may reflect multiple topics and frames. A topic “is […] a manifest subject, an issue or event”, while a frame “is a perspective through which the content is presented” (Chuan et al., 2019:340). To determine whether AI was framed as friend or foe, we posed the following questions:

  • Research question 1: Which topics and sub-topics were prominent in widely circulated South African online news articles?
  • Research question 2: How was AI framed in widely circulated South African online news articles?

3. Methods

3.1 Sample

Since we made use of stratified sampling (Krippendorff, 2013:116), we adhered to specific strata to collect suitable online news articles. First, following an approach adopted by Jones (2015), we selected online news outlets that have a very high distribution in South Africa. Using Feedspot, a site that, amongst other things, offers data curation of news sites, as a guide, we chose to collect articles from The Citizen, the Daily Maverick, the Mail & Guardian, and the SowetanLIVE. Second, we used ProQuest and LexisNexis Academic to collect articles using the search term ‘artificial intelligence’. Like Jones (2015), we did not search for articles using the abbreviation ‘AI’ because it returned too many results, given that it is a common letter combination in English. Third, we selected or eliminated articles based on whether or not they had a sustained focus on AI. We also discarded articles that were not text-based and those that were actually movie listings or letters to the editor, for example. Finally, we collected articles that were published between January 2018 and April 2020. In this way we ultimately selected 73 articles across the four news outlets.1 The unit of analysis was the entire news article.
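To make the selection procedure concrete, the following minimal sketch expresses the criteria above in Python. The record fields (outlet, title, body, published, kind) and the sustained-focus heuristic are illustrative assumptions only; they are not the export format of ProQuest or LexisNexis, and the actual focus judgement was made manually.

from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    outlet: str
    title: str
    body: str
    published: date
    kind: str  # e.g. "news", "movie_listing", "letter"

OUTLETS = {"The Citizen", "Daily Maverick", "Mail & Guardian", "SowetanLIVE"}
START, END = date(2018, 1, 1), date(2020, 4, 30)

def sustained_focus_on_ai(article: Article) -> bool:
    # Placeholder for the manual judgement described in the text; the search
    # term, not the ambiguous abbreviation 'AI', is used.
    return article.body.lower().count("artificial intelligence") >= 3

def include(article: Article) -> bool:
    return (
        article.outlet in OUTLETS
        and article.kind == "news"                # text-based news items only
        and START <= article.published <= END     # January 2018 to April 2020
        and sustained_focus_on_ai(article)
    )

# sample = [a for a in retrieved_articles if include(a)]  # yielded 73 articles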

We are mindful that a qualitative study such as this one is open to criticism since it is not possible for a researcher to distance him- or herself from the subject under investigation (Jones, 2015:26). With this in mind, we kept a detailed memo in which we compiled a thick description of our design, methodology, and analyses, and this “detailed reckoning” (Jones, 2015:26) is available for perusal.

3.2 Framework of analysis

We made use of existing frames to answer Research question 2, combining and adapting frames first proposed by Nisbet (2009b, 2016). These are the frames of social progress, the middle way, morality/ethics, Pandora’s Box, and accountability. We also employed three frames proposed by Jones (2015), namely, the frames of competition, nature, and artifice. These frames are summarised in Table 1 below.

Table 1: A typology of frames employed to study AI in the media

Social progress: The frame of social progress is evoked when journalists wish to draw attention to the benefits of AI. Nisbet (2009b) restricts this frame to improvements to quality of life, but we have expanded the definition to include benefits in other areas such as the economy, health, and education.

Competition: The competition frame reflects the threats that AI may pose, and these threats pertain to job losses, automated weapons, data breaches, and the like (Jones, 2015).

Middle way: Journalists may employ a middle way frame to propose what Obozintsev (2018:86) refers to as a “third way between conflicting or polarized views of options”.

Nature: Jones (2015:32) argues that articles that evoke the frame of nature “tend to discuss our continuing relationships with current technology, question the direction that this relationship is taking, and are often couched in romantic terms. Anthropomorphism is abundant in this discourse”. Typically, the virtues of AI “are identified as superior” (Jones, 2015:32), although journalists may use the frame to pass judgement on this technology.

Artifice: The frame of artifice depicts AI as a technology that is arcane; it is perceived as a technology that will surpass us in intelligence and ultimately engulf us (Jones, 2015:37).

Morality/Ethics: The frame of morality/ethics questions the rights and wrongs as well as the thresholds and boundaries of AI in terms of issues such as data privacy, surveillance, and the development of biased algorithms (Obozintsev, 2018:86).

Pandora’s Box: Also referred to by Nisbet (2009b) as Frankenstein’s monster or as runaway science, Pandora’s Box portrays AI as technology that may spiral out of control.

Accountability: Accountability frames AI as technology that requires control and regulation to prevent, for example, algorithmic bias or abuses of power. (Nisbet (2009b) refers to this frame as the frame of public accountability/governance.)

4. Findings: Research question 1

4.1 Main topics and sub-topics

Many articles in the dataset covered multiple topics, but Table 2 below provides a summary of the main topics reflected in the title and first paragraph of each article. The most popular topic in the dataset was ‘Business, finance, and the economy’ (18 articles), and under this topic, the sub-topic of AI and job losses was most prominent, followed by AI and job creation, and AI-driven technology that functions as a personal financial advisor. Less frequent sub-topics dealt with under ‘Business, finance, and the economy’ are also provided in Table 2. After this, popular topics revolved around describing ‘AI-human interaction’ in terms of the anthropomorphism of AI (12 articles) and reporting on ‘Big Brother’ as it pertains to the use of AI to surveil online users and gain access to their personal data (eight articles). The next three prominent topics (with six articles each) were ‘Healthcare and medicine’ (as it related to AI being used to detect cancer and to function as doctors), ‘Human control over AI’ (in the sense of a need to control and regulate AI), and ‘South Africa’s preparedness for an AI-driven world’, which reflected concerns about the country’s AI skills shortage within the context of the fourth industrial revolution. Three articles each were devoted to the topics of the ‘Environment’ (food production, crop management, and ecology), the ‘News industry’ (deepfaking and the curation of news by AI), and the ‘Uncanny valley’ (considered in the discussion section of this paper). ‘Defence weapons’ (so-called ‘killer robots’), ‘Singularity’ (which describes a hypothetical future in which technology will become uncontrollable), and ‘Strong AI’ (which describes the goal of some in the field of AI to create machines whose intellectual capability will match that of human beings) featured in two articles each. Finally, one article in the dataset focused on ‘Education’ (specifically on robot teachers) and one on ‘Cyborgs’. The latter topic was not filed under the ‘Uncanny valley’ because it was an outlier in the dataset; it revolved around an individual who had a cybernetic implant attached to the base of his skull, and no other article featured human enhancement with in-the-body AI technology.

All news outlets covered ‘Business, finance, and the economy’, ‘AI-human interaction’, ‘Healthcare and medicine’, and ‘South Africa’s preparedness for an AI-driven world’. ‘Big Brother’ was addressed only in articles published by The Citizen as were ‘Cyborgs’, ‘Education’, and ‘Strong AI’. Both the Daily Maverick and the Mail & Guardian reported on ‘Defence weapons’ and ‘Human control over AI’. AI in the ‘News industry’ was covered by The Citizen and the Daily Maverick. The ‘Uncanny valley’ featured in both The Citizen and the SowetanLIVE. AI as it relates to the ‘Environment’ received attention in The Citizen, the Daily Maverick, and the SowetanLIVE. The topic of ‘Singularity’ was addressed only in Daily Maverick articles. Apart from concluding that it is not surprising that the topic of ‘Business, finance, and the economy’ dominated as this dominance has been detected in other studies on how AI is framed in the media (cf. Chuan et al., 2019), we hesitate to draw any specific conclusions from this data. One reason for this is that, as stated at the beginning of this section, most articles reflected multiple topics. Another reason is that some of the titles in the dataset were quite misleading. One article published in The Citizen, for example, was entitled ‘Ramaphosa becomes first head of state to appear as hologram’ (5 July 2019), but the article itself considered South Africa’s preparedness to cope with the fourth industrial revolution including AI and robotics.

Table 2: Topics and sub-topics in the dataset (n=73 articles)

Main topic | Sub-topic(s) if applicable | Number of articles
AI-human interaction | Robot companions/colleagues/assistants | 12
Big Brother | AI surveillance/data privacy, including discussions of ethics/algorithmic bias | 8
Business, finance, and the economy | AI and job losses | 6
 | AI and job creation | 3
 | AI as financial advisors | 3
 | AI to help businesses grow/become more efficient | 4
 | AI to assist in human resources (including discussions of ethics/algorithmic bias) | 1
 | AI as insurance brokers | 1
Cyborgs | Human enhancement with in-the-body, AI-driven technology | 1
Defence weapons | Automated weapons | 2
Education | AI teachers | 1
Environment | AI to improve food and crops | 2
 | AI to improve the ecosystem | 1
Healthcare and medicine | AI diagnosticians | 5
 | AI doctors | 1
Human control over AI | The importance of human agency in the development and implementation of AI | 6
News industry | AI and deepfaking | 2
 | AI as curating the news (including discussions of ethics/algorithmic bias) | 1
Singularity | Human identity under singularity | 1
 | Human beings in a post-work world | 1
South Africa’s preparedness for an AI-driven world (AI skills shortage) | Training of human beings in an AI-driven world | 6
Strong AI | AI modelled on the human brain | 2
Uncanny valley | Appearance of AI as human-like or robot-like | 3

5. Findings: Research question 2

5.1 The frames of social progress and competition

An exhaustive analysis of all articles in terms of whether the frame of social progress or the frame of competition was more salient indicates that across all four news outlets, 50.68% of the articles focused on social progress, while 15.06% addressed competition (Table 3). Since the frame of social progress reflects the benefits that AI holds for humankind, while the frame of competition reflects the risks and threats inherent in AI, we concluded that most of the articles across the outlets were positively disposed towards AI, a conclusion supported by other studies (Obozintsev, 2018; Cui and Wu, 2019; Garvey and Maskal, 2019). One important reason for considering the salience of the two frames is that in terms of the distribution of these frames by source, both frames were employed in 54.79% of the articles (Table 4), a phenomenon which is considered in detail in the discussion section. We will return to the frames of social progress and competition just before the discussion section when we have considered all frames in context.

Table 3: Most salient frame by source

Newspaper | Social progress | Competition | Middle way | Nature | Artifice | Morality/Ethics | Pandora’s Box | Accountability
The Citizen (n=33) | 19 | 4 | 0 | 2 | 0 | 6 | 0 | 2
Daily Maverick (n=16) | 8 | 1 | 0 | 0 | 0 | 3 | 0 | 4
Mail & Guardian (n=14) | 6 | 3 | 0 | 1 | 0 | 0 | 0 | 4
SowetanLIVE (n=10) | 4 | 3 | 0 | 3 | 0 | 0 | 0 | 0

Table 4: Distribution of frames in the dataset (without taking salience into account)

News outlet | Social progress | Competition | Social progress and competition (in which the middle way was employed) | Middle way (in which the social progress and competition frames were not evoked) | Nature | Artifice | Morality/Ethics | Pandora’s Box | Accountability
The Citizen (n=33) | 8 | 8 | 14 (middle way = 10) | 3 | 22 | 6 | 9 | 1 | 2
Daily Maverick (n=16) | 2 | 4 | 10 (middle way = 6) | 1 | 12 | 8 | 6 | 1 | 6
Mail & Guardian (n=14) | 2 | 2 | 10 (middle way = 7) | 4 | 10 | 6 | 5 | 1 | 7
SowetanLIVE (n=10) | 2 | 2 | 6 (middle way = 3) | 0 | 9 | 4 | 1 | 0 | 1

Middle way frame used in 34 articles in total.

5.2 The middle way frame and its presence or absence in articles employing the frames of social progress and competition

In her study of the framing of AI in news articles (n=64), Obozintsev (2018:40) found that only 3.1% were framed in terms of a middle way frame; in our dataset, by contrast, 35.61% of all articles employed both the frames of social progress and competition and reflected a middle way frame (Table 4). Of the 14 articles published in The Citizen that evoked both frames, ten employed a middle way frame. Of the ten articles in the Daily Maverick that used the two frames, six employed a middle way frame, and of the ten Mail & Guardian articles that used both frames, seven employed a middle way frame. The SowetanLIVE dataset contained six articles employing the two frames, and three of these evoked a middle way frame. The possible reasons why a middle way frame was constructed in these articles are considered in the discussion section.

5.3 The remainder of the frames

Although the frame of nature was not one that was made salient in most articles – it featured as a prominent frame in only six articles – it was nevertheless employed in 72.60% of all articles, which is not surprising, given that it is popular in the media to question or embrace the potential for AI to match or surpass human intelligence and to interrogate its capacity to form bonds with human beings. The frame of artifice did not appear at all as a salient frame, although it was used in 25.47% of all articles. The frame of morality/ethics was a salient frame in nine articles only, but it did occur in 28.76% of the articles in the dataset. Pandora’s Box did not feature as a salient frame, but was employed in 4.10% of the articles. Accountability is a frame that Obozintsev (2018) reports was rare in her dataset (7.8%), but we found that this was used as a salient frame in 13.69% of the articles under investigation, while it was touched upon in 21.91% of all articles. We speculate that its salience in particular articles was partly due to the fact that these articles also reflected the frame of morality/ethics and/or Pandora’s Box, frames which typically question issues of control and power. In addition, the writers of these articles included academics, computer scientists, mechanical engineers, and social commentators, all of whom have a vested interest in ethical issues around AI and in technology that leverages AI for the public good.

5.4 Revisiting social progress and competition in relation to all frames

Although a pro-AI stance was evident across the news outlets when we examined the salience of the frames of social progress and competition, we also had to consider the prevalence of this stance based on (1) an analysis of all frames and (2) a close reading of each article. Based on the data in Table 4 alone, one could be forgiven for concluding that most articles did not in fact reflect a pro-AI stance, given that frames other than the social progress frame may reflect negative views of AI. However, an exhaustive analysis of each article allowed us to reject this conclusion. The analysis indicated, for example, that across the dataset, the frame of nature was evoked in 53 articles. In 35 of these articles, AI was depicted in positive terms, 15 reflected a negative view, and three were neutral (in the sense that they did not adopt a specific tone). For every article, we tracked each frame and determined whether, overall, AI was portrayed in a positive, negative or neutral light. We concluded that 41 articles (56.16%) reflected a positive view of AI, 29 (39.72%) conveyed a negative view, and three (4.1%) were neutral (Table 5). AI was overwhelmingly viewed in a positive light in ‘AI-human interaction’, ‘Business, finance, and the economy’, ‘Education’, the ‘Environment’, and ‘Healthcare and medicine’, while it was depicted in a negative light in discussions around ‘Big Brother’, ‘Defence weapons’, ‘Human control over AI’, the ‘News industry’, and ‘South Africa’s preparedness for an AI-driven world’. In terms of social progress (i.e., benefits) and competition (i.e., threats), the news coverage of AI across outlets and sources was more positive than negative.

Table 5: Positive, negative or neutral views of dominant AI topics

Topic | Number of articles | Positive | Negative | Neutral
AI-human interaction | 12 | 11 | 1 | 0
Big Brother | 8 | 1 | 7 | 0
Business, finance, and the economy | 18 | 13 | 5 | 0
Cyborgs | 1 | 0 | 0 | 1
Defence weapons | 2 | 0 | 2 | 0
Education | 1 | 1 | 0 | 0
Environment | 3 | 3 | 0 | 0
Healthcare and medicine | 6 | 6 | 0 | 0
Human control over AI | 6 | 0 | 6 | 0
News industry | 3 | 0 | 3 | 0
Singularity | 2 | 1 | 1 | 0
South Africa’s preparedness for an AI-driven world (AI skills shortage) | 6 | 2 | 3 | 1
Strong AI | 2 | 1 | 0 | 1
Uncanny valley | 3 | 2 | 1 | 0
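As a minimal illustration of the tallying described in Section 5.4, the following Python sketch counts overall valence labels per article and per topic. The records shown are invented placeholders; the actual coding of frames and valence was done manually.

from collections import Counter

coded = [
    {"topic": "Healthcare and medicine", "valence": "positive"},
    {"topic": "Big Brother", "valence": "negative"},
    {"topic": "Cyborgs", "valence": "neutral"},
    # ... one record per article in the dataset (n=73)
]

n = len(coded)
overall = Counter(article["valence"] for article in coded)
for valence, count in overall.items():
    print(f"{valence}: {count} articles ({count / n:.2%})")

# Per-topic breakdown, as summarised in Table 5
by_topic = Counter((article["topic"], article["valence"]) for article in coded)
for (topic, valence), count in sorted(by_topic.items()):
    print(f"{topic} - {valence}: {count}")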

6. Discussion

6.1 Nature and artifice

In an insightful Royal Society report of 2018, the researchers point to a tendency in fictional narratives to anthropomorphise AI, and this tendency was apparent in many of the articles in our dataset that evoked the frames of nature and artifice (Cave et al., 2018). Below are some typical examples of the anthropomorphisation of AI:

  • Intellectual superiority: “Technology has the ability both to remove [a financial advisor’s] biases and analyse a full array of products, potentially identifying suitable solutions that the advisor may have missed on their own” (The Citizen, 10 March 2018).
  • Human-like senses: “AI noses are now able to smell toxic materials, AI tongues can now taste wines and offer opinions on their taste scores, and robots are now able to touch and feel objects” (Daily Maverick, 12 November 2018).
  • Robot domination: “…robots decide who gets to live and who dies” (Mail & Guardian, 11 April 2018).
  • Life-like robots: a robotic model called ‘Noonoouri’ “is said to be 18 years old and 1.5m tall. The Parisian describes herself as cute, curious and a lover of couture” (SowetanLIVE, 20 September 2018).

The Royal Society report (2018:4) notes that what is concerning about such descriptions is that they instil certain “[e]xaggerated expectations and fears about AI”, and unfortunately also “contribute to misinformed debate, with potentially significant consequences for AI research, funding, regulation and reception”. It is important to point out that not all the articles in our dataset portrayed AI in such a way that it was disconnected from reality. In an article in The Citizen entitled ‘China’s doctor shortage prompts rush for AI healthcare’ (20 September 2018), the journalist evoked the frame of nature when she subtly judged AI’s capacity for emotional intelligence by quoting some patients as claiming that they still “prefer the human touch.” She also quoted a technology officer as observing that “It doesn’t feel the same as a doctor yet. I also don’t understand what the result means.” These sentiments echo those of medical informatician Sandeep Reddy (2018:93), who contends that “[c]ontemporary healthcare delivery models are very dependent on human reasoning, patient-clinician communication and establishing professional relationships with patients to ensure compliance”.

6.2 AI as superior to human intelligence

Yet, many articles in the dataset that evoked the frame of nature to portray AI as matching or surpassing human intelligence also questioned this intelligence either by suggesting that AI should be regulated by human beings or by arguing that AI can neither feel nor think creatively. In a SowetanLIVE article published on 20 March 2018, for instance, the journalist questioned AI’s intelligence through the frame of nature: “[a major concern] is the fact that although robots may have AI (Artificial Intelligence), they are not as intelligent as humans. They can never improve their jobs outside their pre-defined programming because they simply cannot think for themselves. Robots have no sense of emotions or conscience. They lack empathy and this is one major disadvantage of having an emotionless workplace.” Only seven articles reflected the view that AI is unequivocally superior to human intelligence (although often doing so through the use of reported speech and/or multiple voices). Such a view “may be detrimental to the public’s understanding of A.I. as an emerging beneficial technology” (Obozintsev, 2018:1), and readers could be forgiven for feeling anxious when confronted by statements such as “educators must consider what skills graduates will need when humans can no longer compete with robots” (Mail & Guardian, 16 February 2018) and “there will come a time where technology will advance so exponentially that the human systems we know will be obliterated” (Mail & Guardian, 4 October 2019). Another article that framed AI as transcending human intelligence was ‘Self-navigating AI learns to take shortcuts: Study’ (The Citizen, 9 May 2018). As is typically the case when the frame of nature was employed, the AI system in this article was romanticised and anthropomorphised through messages such as “A computer programme modelled on the human brain learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert.” Although the journalist framed the AI system in terms of the claims made about it by its designers, she did not go on to question the claims. David Watson (2019:417), who studies the epistemological foundations of machine learning, argues that “[d]espite the temptation to fall back on anthropomorphic tropes when discussing AI […] such rhetoric is at best misleading and at worst downright dangerous. The impulse to humanize algorithms is an obstacle to properly conceptualizing the ethical challenges posed by emerging technologies”. (We consider how ethical issues surrounding AI were represented in our dataset a little later on in this paper when we discuss morality/ethics, accountability, and Pandora’s Box.)

6.3 AI that looks/sounds like a human or AI that looks/sounds like a robot?

In a number of articles, the frame of nature or the frame of artifice was evoked to depict AI as human-like, and one example was evident in ‘Who’s afraid of robots?’ (Daily Maverick, 5 March 2019) in which the suggestion was made that human and robotic news anchors could become indistinguishable from one another in the near future. By contrast, AI in other articles was described as looking more robot-like. In ‘Robot teachers invade Chinese kindergartens’ (The Citizen, 29 August 2018), for instance, an educational robot called ‘Keeko’ was described as “[r]ound and white with a tubby body” and as an “armless robot” that “zips around on tiny wheels.” In the same article, the journalist quoted a teacher as describing the robot as “adorable”. It is no coincidence that in the dataset, robots that looked like ‘Keeko’ were variously described as “adorable” (The Citizen, 29 August 2018) and “client-friendly” (SowetanLIVE, 20 March 2018), while those who looked or sounded like human beings were framed as “eerie” (SowetanLIVE, 8 March 2018) or “uncanny” (Daily Maverick, 10 November 2019). These types of descriptions constitute a reference to the ‘uncanny valley’, a phenomenon “which describes the point at which something nonhuman has begun to look so human that the subtle differences left appear disturbing” (Samuel, 2019:12). Research studies indicate that individuals perceive robots to be less creepy if they are designed in such a way that they are distinguishable from human beings (MacDorman, 2006; Greco, Anerdi and Rodriguez, 2009). In ‘The rise of the machines looks nothing like the movies’ (Daily Maverick, 10 November 2019), the journalist briefly speculated why most machines do not look like humans: “most do not resemble us, they do not walk on two feet, they do not have pre-programmed facial expressions and human gestures for us to study, for us to suspect, to imbue with sinister motives, real or imagined.” A number of scholars have speculated why most machines do not have a human-like appearance. Samuel (2019: 12), for example, argues that “[while] eliciting social responses in humans is easier when the robot in front of them is human-like in design, this does not mean that robots automatically become more accepted the more human they look. This may initially be the case, but human design appears to reach a point at which positive social responses turn into negative ones and robots are rejected for seeming ‘too human’”. The journalist of an article published in The Citizen on 5 September 2018 showed awareness of this problem when, in evoking the frame of nature, he noted that because a machine called ‘Sophia’ “is designed to look as much like a robot as a human, with its mechanical brain exposed, and no wig in place to humanise her further”, people who encounter her “know they are dealing with a robot and don’t feel fooled into believing it is human.”

Of course it is not a given that individuals simply do not like machines that look human. Indeed, Samuel (2019:9) puts paid to the notion that people are inclined to favour anthropomorphic robots. He argues that “people show a preference for robots’ design to be matched to their task” (Samuel, 2019:9). In this respect, people tend to be positively disposed to human-like features if the robot in question is a social robot. Alternatively, “an industrial robot may be thought of in a different manner and thus does not appear to need to look human in order to be deemed acceptable for their task by a human observer” (Samuel, 2019: 9). In the dataset, a number of journalists appeared to be aware of the connection between appearance and task. For example, in ‘“Call me baby”: Talking sex dolls fill a void in China’ (SowetanLIVE, 4 February 2018), the reporter evoked the frame of nature to describe sex dolls as “shapely” (although the reporter also questioned just how life-like the dolls were, describing one as possessing a “robotic voice” and as having lips that do not move). In addition, the dolls were described as being able to “talk, play music and turn on dishwashers”. Clearly, in order to be regarded as a social companion, such dolls are required to look, sound and act more life-like.

On the subject of robots serving a social function, the journalist of the article just referred to also reported that “buyers can customise each doll for height, skin tone, amount of pubic hair, eye colour and hair colour”. Tellingly, the journalist went on to claim that “the most popular dolls have pale skin, disproportionately swelled breasts (sic) and measure between 158 and 170 centimetres”. The way in which these dolls were described here is similar to the way in which they were described in other articles in the dataset. For instance, and quoting Hanson Robotics (probably to maintain authorial distance), the journalist in an article published in The Citizen (5 September 2018) described the robot ‘Sophia’ as being “endowed with remarkable […] aesthetics”, while in a SowetanLIVE article (28 September 2018), robots serving as fashion models were described as female as well as “lean” or “slender, with dark flawless skin”. In the entire dataset, most robots driven by AI technology were reported as being female; in addition to Sophia, sex dolls, and robotic fashion models (such as ‘Shundu’ and ‘Noonoouri’), journalists also referred to ‘Alexa’, a virtual assistant (SowetanLIVE, 8 March 2018), ‘Vera’, a robot that assists in interviewing prospective job candidates (The Citizen, 27 April 2018), and ‘Rose’, a robot that sells insurance (The Citizen, 22 January 2020). According to Döring and Poeschl (2019:665), the media tend to represent human-robot relationships in terms of “stereotypical gender roles” and “heteronormativity” (cf. Stassa, 2016; Ndonye, 2019), although it should be added that in our dataset, the various journalists did not themselves encourage these representations, but merely framed female robots in terms of how they were described by their designers or by the AI industry in general. Disappointingly, with the exception of three journalists who (1) alluded to individuals on Chinese social media platforms expressing their concerns that sex dolls “reinforce(d) sexist stereotypes” (SowetanLIVE, 4 February 2018), (2) questioned whether AI may discriminate against bank loan applicants on the basis of gender (Mail & Guardian, 14 March 2019), and (3) observed that “[w]omen and minorities are grossly […] underrepresented” when AI-driven algorithms are employed, no other journalist in our dataset questioned how the AI industry is reinforcing power relations in which the objectification of women is normalised. This is highly problematic in another important sense: the media need to challenge gender stereotypes because applying gender to an AI-driven application may have serious consequences. In this respect, McDonnell and Baxter (2019:116) point out that “[t]he application of gender to a conversational agent [such as a chatbot system] brings along with it the projection of user biases and preconceptions”.

Remaining with the subject of how AI-driven technology is anthropomorphised, it is interesting to note that in the dataset, the term ‘AI’ or ‘artificial intelligence’ was often replaced by the word ‘robot’, and there appear to be two reasons for this. First, in articles in which AI was regarded as dangerous, the word ‘robot’ served as a “spacing device” (Jones, 2015:40) in the sense that “it [set] another barrier between the reader and the developer of the technology and [provided] a focus for any negative will”. Jones (2015:40) observes that “[it] seems far easier for the journalist to focus on a physical being than an abstract concept called artificial intelligence”. This was evident in ‘South Africa should lead effort to ban killer robots’ (Mail & Guardian, 11 April 2018) in which the journalist referred to governments around the world producing “killer robots” that “decide who gets to live and who dies”. Here, the term ‘killer robots’ was used in place of the term AI/artificial intelligence, “[providing] a focus for risk concerns about the Other” (Jones, 2015:44). Second, in articles in which AI was regarded in a more positive light, the term ‘robot’ “[served] to assist in the anthropomorphizing of the technology as it is far easier to draw comparisons between a human body and a robot body” (Jones, 2015:40). In an article that appeared in The Citizen on 29 August 2018, the journalists used their own voices as well as that of a teacher to describe Keeko, an educational robot, as “adorable” and as “[reacting] with delight” when children answer questions correctly. The journalists did not interrogate the societal or ethical consequences of placing such educational robots in classrooms, and even quoted the principal of the kindergarten where Keeko is based as stating that robots are “more stable” than human teachers. According to scholars such as Engstrom (2018:19), “in humanising robots and AI, we have to ask ourselves whether our perception of them as machines changes – for example, whether it causes us to feel empathy or even love for them, and whether it will make us have higher expectations [of] the technologies to carry out human responsibilities”. This is unfortunately not a theme that the journalists in our dataset interrogated.

Why did the journalists in our dataset show a tendency to portray AI-driven technology in human form? A partial answer may lie in how AI is portrayed in film and on television. Brennan (2016:1) speculates that “[i]dentification depends on viewers’ ability to understand characters through the lens of their own experience. As such, it relies on recognisable social categories like gender, age, nationality, class and so on. […] Writers must construct the characters of technological protagonists, or antagonists, using recognisable human traits”. Brennan (2016:1) goes on to argue that this may unfortunately “limit the ways that AI and robotics are represented and imagined”. Interestingly, the journalist in a Daily Maverick article published on 10 November 2019 speculated that we are inclined to depict robots in human form “perhaps because of our elevated view of ourselves.”

6.4 Morality/ethics, Pandora’s Box, and accountability: AI as uncontrollable and unregulated

Raquel Magalhães (2019:1), editorial manager of Understanding with Unbabel, argues that what is problematic about humanoid representations of AI is that they detract from real issues, particularly from those that pertain to ethical considerations around data privacy concerns (owing to facial recognition algorithms), the use of biased algorithms in decision making, ‘killer robots’, and the absence of clear policies that help control and regulate the development of AI. However, there does appear to be some light at the end of the tunnel; a recent study by Ouchchy, Coin and Dubljević (2020:1) has found that although the media’s coverage of the ethics of AI is somewhat superficial, it does nevertheless have “a realistic and practical focus”, and our dataset confirmed this finding.

As already noted, a total of 21 articles in the dataset (28.76%) employed the morality/ethics frame, and within this frame, journalists questioned the ethics of fake news (‘Misinformation woes could multiply with “deepfake” videos’, The Citizen, 28 January 2018), data breaches (‘#FaceApp sparks data privacy concerns’, The Citizen, 17 July 2019), video surveillance (‘CCTV networks are ‘driving an AI-powered apartheid SA’, The Citizen, 9 December 2019), and biased algorithms (‘Developing countries need to wake up to the risks of new technologies’, Mail & Guardian, 8 January 2018). All these threats constitute reasonable concerns in the area of AI (Stahl, Timmermans and Mittelstadt, 2016; O’Carroll and Driscoll, 2018), yet they are sometimes overlooked in favour of what Bartz-Beielstein (2019:1) refers to as “the well-known ones such as the weaponisation of AI or the loss of employment opportunities”. In a study in which, amongst other things, she explores the nature of media coverage of AI, Obozintsev (2018) reports that the frame of morality/ethics was rarely employed in her dataset. It is possible that this frame was evoked more often in our dataset because a number of writers were also academics, computer scientists, and technology experts; we speculate that writers in these fields may be particularly interested in considering the political, socio-cultural, economic, and ethical implications of AI in (South) Africa.

As noted in the findings section, 21.91% of all articles evoked the frame of accountability. In doing so, these articles reflected (1) fears about how AI is controlling human beings in terms of their movements/online speech or (2) an emphasis on the need for human beings to control and/or regulate this technology in some way. An example of fears about AI controlling human beings is evident in the title of an article published in The Citizen on 5 August 2019, ‘Whatsapp could soon start censoring what you are saying’, while an example of a call for human beings to control and/or regulate AI is reflected in an 18 July 2019 Daily Maverick article in which the journalist claimed that “At least four rights are threatened by the unregulated growth of AI: freedom of expression, privacy, equality and political participation.” Fears about AI controlling human beings or about AI being uncontrollable were particularly evident in the Pandora’s Box frame, which was constructed in three articles (4.10% of the dataset). In ‘Prepare for the time of the robots’ (Mail & Guardian, 16 February 2018), for example, the journalist argued that AI-driven technology could unleash a Pandora’s Box and that students in Africa have to be properly trained “so that they gain the insight that will be needed to defend people from forces that may seek to turn individuals into disposable parts”.

What is interesting about these fears is that they mirror one of the findings of Fast and Horvitz (2017:966) that “[t]he fear of loss of control […] has become far more common in recent years” when it comes to public opinion of AI. Readers will no doubt feel unnerved when they encounter statements such as “[the future] looks to be dominated by machines” (Mail & Guardian, 16 February 2018). Such predictions conjure up AI as arcane, as a technology understood by only a few, and readers may be compelled to conclude that AI is beyond their control (cf. Nelkin, 1995:162).

On a positive note, of the three articles that evoked Pandora’s Box, only one offered no solutions as to how we should control and regulate AI. The remaining articles referred to putting policies in place that will protect users in (South) Africa from the dangers of AI, such as those that pertain to privacy issues and autonomous weapons, as well as to ensuring that human beings take responsibility for the performance of AI systems. To provide a specific example, in a Daily Maverick article (29 January 2019), the writer called for technology and data to be democratised: “We […] need to incorporate into the devices of the 4IR a character of the world as we desire it and not make these devices reflect biases, prejudices and unequal economic spaces as they currently exist.” By offering up solutions such as these, journalists challenged the notion that “technology is developed in a vacuum, with the suggestion being that the human user is an afterthought” (Jones, 2015:36). Holguín (2018:17) points out that the importance of taking responsibility for AI is sometimes overlooked, “but it seems crucial for understanding the development of technology as depending on human agency. Thus, the improvements and goals of these intelligent systems are not self-driven by the force of technology, but by the decisions of the human actors behind their creation”.

It is encouraging that some of the articles in our dataset (i) identified AI’s potential threats as they relate to morality and ethics and (ii) within the frame of accountability, considered policies and principles that could help regulate these threats in (South) Africa. In ‘Why we need an AI-resilient society’, Bartz-Beielstein (2019:1) refers to strategy (i) as “awareness” and to strategy (ii) as “agreements”. The former involves helping society recognise the dangers that AI may pose; such awareness can be generated “by publishing papers and giving public talks” (Bartz-Beielstein, 2019:6), for example. The latter calls for society to generate principles and laws that regulate different aspects of AI.

6.5 Constructing dualistic frames

It is not surprising that just over half of the articles under investigation in this study reflected both pro- and anti-technology discourse, since such a paradox is inherent in many news articles about technology, including AI (Jones, 2015:42; Brennen, Howard and Nielsen, 2018; Chuan et al., 2019). In the dataset, the polarised discourse around AI was typically framed in terms of both competition and social progress. This dualistic frame is apparent in a Mail & Guardian article which appeared on 16 March 2020 in which a World Economic Forum claim – “automation will displace 75-million jobs worldwide by 2022” – was juxtaposed with the statement that “AI is reducing the time it takes to generate reports, analyse risks and rewards, make decisions and monitor financial health.” The question is why AI is simultaneously framed in terms of competition and social progress. Some scholars are of the view that AI may be constructed in dualistic terms to offer up a “digital opiate for the masses” (Floridi, 2016:1), as it were. Philosopher and ethics scholar Luciano Floridi (2019:1) puts it bluntly when he observes that “Fear always sells well, like vampire or zombie movies”: in other words, it is appealing for the mass media to frame AI around dystopian, dualistic narratives (Holguín, 2018:5) because this increases readership and ratings (cf. Obozintsev, 2018:1). This view is shared by Dorothy Nelkin who, in her cynically titled book Selling science, argues that “too often science in the press is more a subject for consumption than for public scrutiny, more a source of entertainment than for information” (Nelkin, 1995:162). This entertainment factor was apparent in the more sensationalist or alarmist titles in the dataset such as ‘“Call me baby”: Talking sex dolls fill a void in China’ (SowetanLIVE, 4 February 2018), ‘Prepare for the time of the robots’ (Mail & Guardian, 16 February 2018), and ‘Ballerina bots of the Amazon job-pocalypse’ (Mail & Guardian, 1 March 2019). Of course, articles about AI may also be alarmist and/or sensationalist because the media are under pressure to succeed within what Davenport and Beck (2001:2) refer to as the “attention economy” (cf. Cave et al., 2018:17) in which clicks and views are highly sought after.

Of interest is that research on dualistic or competing frames indicates that individuals are averse to such frames and therefore attempt to resist them (Sniderman and Theriault, 2004). Obozintsev (2018:15) observes that “exposure to two competing frames can render one frame ineffective, or even counter-effective”, particularly if a frame is not aligned with readers’ belief systems.

6.6 Framing uncertainty through dualistic frames

We argue that making use of a dualistic frame such as one in which AI is couched in terms of both competition and social progress is not necessarily a reflection of bad journalism (cf. Kampourakis, 2019). Indeed, it is not surprising that the media employ competing frames given that AI is an emerging technology characterised by uncertainty and conflict (cf. Hornmoen, 2009:1; Kampourakis and McCain, 2020:152). Notwithstanding the fact that the relationship between science and journalism is complex, Holguín (2018:5) contends that “[when] the scientific community is not able to agree on the possible risks or impact of a new scientific or technological breakthrough, this subject may become salient in newspapers”. Here we propose that the media may highlight uncertainty around AI and its risks or impact by framing this technology in terms of dualistic frames. Hornmoen (2009:16) points out that “[t]he alternation between different perspectives, with an apparently contradictory identification in the journalist’s report, contributes above all to construct an image of an emergent scientific field”. We further suggest that journalists may attempt to resolve the competing frames of competition and social progress in specific ways. In an article published in The Citizen on 19 June 2019, the journalist evoked both the frame of competition to depict AI as destroying jobs and the frame of social progress to portray this technology as creating jobs. The journalist attempted to resolve this dualistic frame by mitigating it: he quoted the vice-president of a software company as claiming that while AI may result in some jobs becoming redundant, AI will also generate labour switching in the sense that it will create “new categories of work.” Quoting Deloitte, he qualified this by reporting that AI will replace menial tasks/manual labour, thus “augmenting the workforce and enabling human work to be reframed in terms of problem solving and the ability to create new knowledge.” What is interesting about this article (and many others in the dataset) is that it did not question the veracity of the claim made in the field of AI that manual labour and menial tasks will be replaced by automated technology. We argue that, in failing to provide readers with this particular context, such articles may compel readers to adopt an anti-AI view (cf. Jones, 2015:41). A typical claim about AI and menial tasks is epitomised in “Menial […] tasks that might once have needed the human touch are slowly but surely being replaced with the accuracy of computers” (SowetanLIVE, 31 July 2018). Although it is undeniable that automation is replacing and will continue to replace certain jobs, in the short- and medium-term at least, “[manual] work is likely to remain surprisingly resistant to automation” (Heath, 2014:1, in conversation with Massachusetts Institute of Technology economist Erik Brynjolfsson). This is due to a phenomenon known as Moravec’s Paradox, according to which AI researchers have observed that machines find it difficult to perform tasks that humans find easy, and vice versa. The journalist of one article published in the Daily Maverick (12 November 2018) referenced this paradox when he stated that “Generally, jobs that require gross motor skills are easier to automate than those that require fine motor skills. The jobs that will remain will be those that require a human touch”.

6.7 Employing the middle way frame

In addition to mitigating claims, the journalists in our dataset appeared to resolve the ‘AI as competition’ and ‘AI as social progress’ paradox by adopting a middle way frame. Typically, the journalists recommended a compromise position in which human beings and AI should work together in order to complete a variety of tasks. In an article published in The Citizen on 22 January 2020, for example, the journalist quoted a clinical professor of imaging sciences as suggesting that “the combined forces of human and machine would be better than either alone” in the context of breast cancer detection.

Out of the 40 articles in the dataset that evoked the frames of competition and social progress, 26 employed the middle way frame and 14 did not. We argue that the presence or absence of the middle way frame in articles that reflect the competing frames may influence how readers perceive AI – whether they regard it as threatening or not. In the 14 articles that did not evoke the middle way frame, the coverage of AI was overwhelmingly alarmist in the sense that this technology was framed as replacing or being about to replace human beings. No room was made for a future in which human beings would be able to exercise control over AI-driven technology. A typical example is reflected in an article in The Citizen (4 October 2019) in which the journalist evoked the frame of nature and quoted Elon Musk as claiming that “computers actually are already much smarter than people on so many dimensions.” We noted that some journalists took this claim a step further and employed the frame of artifice to argue that the lines between AI and human beings will blur to such an extent that the former will entirely replace the latter (cf. Jones, 2015:37). In ‘Prepare for the time of the robots’ (Mail & Guardian, 16 February 2018), the journalist used the frame of artifice to claim that human beings will be “cannibalised” by machines that will “outperform [them] in nearly every job function” in the future. It appears that articles in which AI is portrayed as matching or surpassing human intelligence, but in which a middle way frame is used, may be less alarming to readers because the human element is not dismissed. In a SowetanLIVE article of 2 January 2020, a researcher was quoted as claiming that “[a] computer programme can identify breast cancer from routine scans with greater accuracy than human experts.” However, the journalist tempered this claim when he used a middle way frame to quote the same researcher as observing that “[there’s] the opportunity for this technology to support the existing excellent service of the (human) reviewers.”

6.8 Using reported speech and multiple voices

Whether or not the middle way frame is employed, we also propose that journalists may attempt to resolve uncertainties around AI through the use of quotations/reported speech (Cotter, 2010:174) and multiple perspectives (Hornmoen, 2009:78): “Due to the technical complexity of the latest developments [in] the field and the uncertainty of its predictions around the impact, it seems probable that journalists will count on external sources that to a greater or lesser extent allow them to report on the topic and ‘validate’ their claims and arguments” (Holguín, 2018:7). Examples of the use of reported speech are evident in the section just before this one. Studying why and how journalists employ reported speech warrants a research paper of its own, but Calsamiglia and Ferrero (2003:44) observe that journalists may use reported speech “as a means of orientating their position on the topic of reference” and absolving themselves from “their responsibility to inform objectively”. Another device we identified was the use of multiple voices through reference to formal reports, tests, and academic studies. We see this device operating in ‘Is your job safe from automation?’ (SowetanLIVE, 20 March 2018) in which the journalist stated that “According to a new Accenture report, one in three jobs in South Africa (5.7 million jobs) is currently at risk of total automation.” Use of reported speech and reference to formal reports, tests, and studies allow journalists to establish multiple perspectives which “play a major role in constructing popular understanding of the science in question” (Dunwoody, 1999:69; cf. Hornmoen, 2009:4), particularly if that science is marked by controversy and/or uncertainty. As far as the latter is concerned, Holguín (2018:7) suggests that an over-reliance on ‘experts’ means that journalists avoid providing critical judgements about the risks and impact of AI. In a 4 June 2019 Daily Maverick article, for instance, the journalist claimed that “Soon AI will drive our cars, stock our warehouses and take care of our loved ones. It holds much promise, and industry players say it is on the brink of explosion.” To validate the promise that AI holds, the journalist then quoted a number of experts and referred to “the AI Maturity Report”, which reports that “local organisations invested around R23.5-billion in AI over the last decade.” Other than briefly acknowledging that AI must be driven by human beings, the journalist did not critically interrogate the possible risks of AI.

Looking more closely at our dataset, what is problematic is that the use of multiple voices did not necessarily mean that an article was “multiperspectival” (Hornmoen, 2009:79): “closer inspection may reveal that the text is primarily advancing ‘ways of seeing’ and the rhetoric of a particular group of researchers” (Hornmoen, 2009:79). When it came to articles in our dataset that reflected competing frames, we had to determine which frames were made more salient to promote a particular view of AI (cf. Hornmoen, 2009:81). Consider, for example, ‘Will your financial advisor be replaced by a machine?’ published in The Citizen on 10 March 2018. In this article, AI was framed as a paradox in the sense that it was described in terms of competition (i.e., as leading to loss of jobs for financial advisors) and in terms of social progress (i.e., as helping financial advisors become more creative). The question in the title was repeated in the article: “will [financial advisors] become redundant altogether?” Through reference to multiple voices (in the form of quotations from financial experts), the journalist constructed a middle way frame when he argued that AI will not replace financial advisors and that machines and human beings will work together to provide clients with financial advice. Returning to Calsamiglia and Ferrero’s (2003) study of reported speech, it appears that the use of reported speech in this case allowed the journalist to orientate his position on the topic of machines replacing human beings.

7. Conclusions

Like other studies on media portrayals of AI, our study signals that coverage in widely circulated South African newspapers tended to veer between utopian and dystopian views of this technology, although most articles reflected a more positive view since they evoked the frame of social progress more frequently than they evoked the frame of competition. In other words, AI was portrayed as friend more frequently than it was portrayed as foe. A pro-AI stance was particularly evident in the areas of ‘AI-human interaction’, ‘Business, finance, and the economy’, ‘Education’, the ‘Environment’, and ‘Healthcare and medicine’. Those articles that had an anti-technology stance, and that focused on threats/competition, were dominated by moral/ethical considerations around ‘Big Brother’, ‘Defence weapons’, ‘Human control over AI’, the ‘News industry’, and ‘South Africa’s preparedness for an AI-driven world’.

We argue that the employment of both the frames of social progress and competition may enable journalists to construct AI as an emerging and uncertain technology. We propose that future research should explore how uncertainty/conflict generated by journalists about AI is processed by readers, since the effect of this uncertainty/conflict is not known: “identifying and testing uncertainty-inducing message features is crucial as uncertainty is a complex cognition that can trigger or reduce both positive states […] and negative states” (Jensen and Hurley, 2012:690). As already mentioned, it does appear that a reader’s exposure to conflicting frames may cause a frame to be rendered ineffective or even counter-effective (Obozintsev, 2018:15).

In most articles in which AI was framed in terms of both pro- and anti-technology ideologies, journalists also made use of the middle way frame, which we argue allowed them to establish a compromise position between AI as friend and foe.

Of interest is that many articles made use of anthropomorphic tropes when discussing the nature of AI, and these tropes overwhelmingly and unrealistically framed AI as either matching or surpassing human intelligence. Yet, several articles also subtly judged the intelligence of machines by questioning whether they had the capacity to think and feel. Others touched upon human agency in the development of AI in (South) Africa or considered human agency in more detail, discussing how AI’s growth and implementation should be governed and regulated for the sake of transparency and accountability. The call for human agency is critical as it steers society in the direction of an AI paradigm that draws on Ubuntu and that constructs AI as a technology that orbits around humanity, social justice, and community engagement (Nayebare, 2019:50-51). The public in South Africa and the rest of Africa constitute underrepresented voices in the field of AI (cf. Cisse, 2018), and the media have an important role to play in ensuring that these publics are informed about AI and have a say in its implications for their lives and futures.

Footnotes

1 We acknowledge that since we examined articles published by only four news outlets, our results may not be representative of frames employed by other outlets.

Reference list

Badenhorst, J. 2016. The robots are coming for your jobs. News24, 29 September. Available: https://www.news24.com/xArchive/Voices/the-robots-are-coming-for-your-jobs-20180719 [Date of access: 18 March 2020].

Barocas, S. & Selbst, A.D. 2016. Big data’s disparate impact. California Law Review 104: 671-732.

Bartz-Beielstein, T. 2019. Why we need an AI-resilient society. arXiv preprint arXiv:1912.08786.

Bergstein, B. 2017. The great AI paradox. MIT Technology Review, 15 December. Available: https://www.technologyreview.com/s/609318/the-great-ai-paradox/ [Date of access: 2 April 2020].

Bishop, J.M. 2016. Singularity, or how I learned to stop worrying and love artificial intelligence. Pp. 267-281 in V.C. Müller (Ed), Risks of general intelligence. London, UK: CRC Press – Chapman & Hall.

Borgesius, Z.F. 2018. Discrimination, artificial intelligence, and algorithmic decision-making. Strasbourg: Council of Europe, Directorate General of Democracy.

Brennan, E. 2016. Why does film and television sci-fi tend to portray machines as being human? Communicating with Machines. ICA Post-conference. Fukuoka Sea Hawk Hotel, Fukuoka, Japan, 14 June 2016. Available: https://arrow.tudublin.ie/cgi/viewcontent.cgi?article=1043&context=aaschmedcon [Date of access: 2 April 2020].

Brennen, J.S., Howard, P.N. & Nielsen, R.K. 2018. An industry-led debate: How UK media cover artificial intelligence. RISJ Fact-Sheet. Oxford, UK: University of Oxford.

Broadbent, E. 2017. Interactions with robots: The truths we reveal about ourselves. Annual Review of Psychology 68: 627-652.

Brossard, D. 2013. New media landscapes and the science information consumer. Proceedings of the National Academy of Sciences 110(Suppl. 3): 14096-14101.

Broussard, M. 2018. Artificial unintelligence: How computers misunderstand the world. Cambridge, MA: MIT Press.

Bughin, J. and Hazan, E. 2017. The new spring of artificial intelligence: A few early economies. VOX, 21 August. Available: https://voxeu.org/article/new-spring-artificial-intelligence-few-early-economics [Date of access: 17 March 2020].

Calsamiglia, H. and Ferrero, C. 2003. Role and position of scientific voices: Reported speech in the media. Discourse Studies 5(2): 147-173.

Cave, S., Craig, C., Dihal, K., Dillon, S., Montgomery, J., Singler, B. and Taylor, L. 2018. Portrayals and perceptions of AI and why they matter. Available: https://royalsociety.org/-/media/policy/projects/ai-narratives/AI-narratives-workshop-findings.pdf [Date of access: 2 February 2020].

Chuan, C.H., Tsai, W.H.S. and Cho, S.Y. 2019. Framing artificial intelligence in American newspapers. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society: 339-344.

Cisse, M. 2018. Look to Africa to advance artificial intelligence. Nature 562(7728): 461-462.

Cook, T.S. 2019. The importance of imaging informatics and informaticists in the implementation of AI. Academic Radiology 27: 113-116.

Cotter, C. 2010. News talk: Investigating the language of journalism. New York, NY: Cambridge University Press.

Curioni, A. 2018. Artificial intelligence: Why we must get it right. Informatik-Spektrum 41(1): 7-14.

Davenport, T. and Beck, J. 2001. Attention economy: Understanding the new currency of business. Boston, MA: Harvard Business Review Press.

De Spiegeleire, S., Maas, M. and Sweijs, T. 2017. Artificial intelligence and the future of defense: Strategic implications for small- and medium-sized force providers. The Netherlands: The Hague Centre for Strategic Studies.

Döring, N. and Poeschl, S. 2019. Love and sex with robots: A content analysis of media representations. International Journal of Social Robotics 11(4): 665-677.

Elish, M.C. and Boyd, D. 2018. Situating methods in the magic of Big Data and AI. Communication Monographs 85(1): 57-80.

Emanuel, C.K. 2016. The end of radiology? Three threats to the future practice of radiology. Journal of the American College of Radiology 13: 1415-1420.

Entman, R.M. 1993. Framing: Toward clarification of a fractured paradigm. Journal of Communication 43: 51-68.

Fast, E. and Horvitz, E. 2017. Long-term trends in the public perception of artificial intelligence. Pp. 963-969 in S.P. Satinder and S. Markovitch (Eds), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence. San Francisco, CA: AAAI Press.

Feliciano, D. 2019. In Brazil, ‘AI Gloria’ will help women victims of domestic violence. The Rio Times, 29 April. Available: [Date of access: 17 April 2020].

Floridi, L. 2019. What the near future of artificial intelligence could be. Philosophy & Technology 32: 1-15.

Garvey, C. and Maskal, C. 2019. Sentiment analysis of the news media on artificial intelligence does not support claims of negative bias against artificial intelligence. Omics: A Journal of Integrative Biology 23(0): 1-14.

Gastrow, M. 2015. The stars in our eyes: Representations of the Square Kilometre Array telescope in the South African media. Unpublished Doctoral dissertation. Stellenbosch, South Africa: Stellenbosch University.

Gockley R., Bruce A., Forlizzi J., Michalowski M., Mundell A., Rosenthal S., Sellner B., Simmons R., Snipes K., Schultz A. and Wang, J. 2005. Designing robots for long-term social interaction. In 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems. IROS 2005: 1338-1343.

Greco, A., Anerdi, G. and Rodriguez, G. 2009. Acceptance of an animaloid robot as a starting point for cognitive stimulators supporting elders with cognitive impairments. Revue d’Intelligence Artificielle 23(4): 523-37.

Haenlein, M. and Kaplan, A. 2019. A brief history of artificial intelligence: On the past, present, and future of artificial intelligence. California Management Review 61(4): 5-14.

Hauert, S. 2015. Shape the debate, don’t shy from it. Nature 521(7553): 416-417.

Heath, N. 2014. Why AI could destroy more jobs than it creates, and how to save them. TechRepublic, 18 August. Available: https://www.techrepublic.com/article/ai-is-destroying-more-jobs-than-it-creates-what-it-means-and-how-we-can-stop-it/ [Date of access: 7 April 2020].

Hoes, F. 2019. The Importance of ethics in artificial intelligence (or any other form of technology for that matter). Towards Data Science, 30 December. Available: https://towardsdatascience.com/the-importance-of-ethics-in-artificial-intelligence-16af073dedf8?gi=5314dcbcb910 [Date of access: 3 April 2020].

Holguín, L.M. 2018. Communicating artificial intelligence through newspapers: Where is the real danger? Available: https://mediatechnology.leiden.edu/images/uploads/docs/martin-holguin-thesis-communicating-ai-through-newspapers.pdf [Date of access: 3 April 2020].

Hornmoen, H. 2009. What researchers now can tell us: Representing scientific uncertainty in journalism. Observatorio 3(4): 1-20.

Jensen, J.D. and Hurley, R.J. 2012. Conflicting stories about public scientific controversies: Effects of news convergence and divergence on scientists’ credibility. Public Understanding of Science 21(6): 689-704.

Jones, S. 2015. Reading risk in online news articles about artificial intelligence. Unpublished MA dissertation. Edmonton, Alberta: University of Alberta.

Kabu, N. 2017. A content analysis of scientific news in two South African daily newspapers. Unpublished Doctoral dissertation. Johannesburg, South Africa: University of the Witwatersrand.

Kampourakis, K. 2019. How are the uncertainties in scientific knowledge represented in the public sphere? The genetics of intelligence as a case study. Pp. 288-305 in K. McCain and K. Kampourakis (Eds), What is scientific knowledge? New York, NY: Routledge.

Kampourakis, K. and McCain, K. 2020. Uncertainty: How it makes science advance. USA: Sheridan Books, Incorporated.

Kanda, T., Hirano, T., Eaton, D. and Ishiguro, H. 2004. Interactive robots as social partners and peer tutors for children: A field trial. Human-Computer Interaction 19(1): 61-84.

Kirk, J. 2019. The effect of artificial intelligence (AI) on emotional intelligence (EI). Capgemini, 19 November. Available: https://www.capgemini.com/gb-en/2019/11/the-effect-of-artificial-intelligence-ai-on-emotional-intelligence-ei/ [Date of access: 3 April 2020].

Krippendorff, K. 2013. Content analysis: An introduction to its methodology. Los Angeles, CA: Sage.

Krittanawong, C. 2018. The rise of artificial intelligence and the uncertain future for physicians. European Journal of Internal Medicine 48: e13-e14.

Lara, F. and Deckers, J. 2019. Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics: 1-13.

Lele, A. 2019. Disruptive technologies for the militaries and security. Singapore: Springer.

Lorentz, C. 2018. Is the second artificial intelligence winter just around the corner? NetApp, 13 February. Available: https://blog.netapp.com/is-the-second-artificial-intelligence-winter-just-around-the-corner/ [Date of access: 17 March 2020].

MacDorman, K.F. 2006. Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science: 26-29.

Maclure, J. 2019. The new AI spring: A deflationary view. AI & SOCIETY: 1-4.

Magalhães, R. 2019. Expectations vs. Reality: AI narratives in the media. Understanding with Unbabel, 18 October. Available: unbabel.com/blog/artificial-intelligence-media/ [Date of access: 16 April 2020].

Maruyama, Y. 2020. Quantum physics and cognitive science from a Wittgensteinian perspective: Bohr’s classicism, Chomsky’s universalism, and Bell’s contextualism. Pp. 375-408 in S. Wuppuluri and N. da Costa (Eds), WITTGENSTEINIAN (adj.): Looking at the world from the viewpoint of Wittgenstein’s philosophy. Cham, Switzerland: Springer Nature Switzerland AG.

Mbuthia, W. 2018. The rise of sex robots and the controversy that comes with them. Available: https://www.standardmedia.co.ke/evewoman/article/2001266355/sex-robots-a-necessary-evil-or-a-pure-curse [Date of access: 3 April 2020].

McCarthy, J., Minsky, M.L., Rochester, N. and Shannon, C.E. 2006. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine 27(4): 12-14.

McCulloch, W.S. and Pitts, W. 1943. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics 5(4): 115-133.

McDonnell, M. and Baxter, D. 2019. Chatbots and gender stereotyping. Interacting with Computers 31(2): 116-121.

Mondal, B. 2020. Artificial intelligence: State of the art. Pp. 389-425 in V.E. Balas, R. Kumar and R. Srivastava (Eds), Recent trends and advances in artificial intelligence and Internet of Things. Cham, Switzerland: Springer.

Müller, V.C. 2016. Editorial: Risks of artificial intelligence. Pp. 1-8 in V.C. Müller (Ed), Risks of general intelligence. London, UK: CRC Press – Chapman & Hall.

Nayebare, M. 2019. Artificial intelligence policies in Africa over the next five years. XRDS: Crossroads, The ACM Magazine for Students 26(2): 50-54.

Ndonye, M.M. 2019. Mass-mediated feminist scholarship failure in Africa: Normalised body-objectification as artificial intelligence (AI). Editon Consortium Journal of Media and Communication Studies (ECJMCS) 1(1): 1-8.

Nelkin, D. 1995. Selling science: How the press covers science and technology. New York, NY: W.H. Freeman and Company.

Ng, A. 2018. AI transformation playbook. Landing AI, 13 December. Available: https://landing.ai/ai-transformation-playbook/ [Date of access: 3 April 2020].

Nisbet, M.C. 2009a. The ethics of framing science. Pp. 51-73 in B. Nerlich, B. Larson and R. Elliott (Eds), Communicating biological sciences: Ethical and metaphorical dimensions. London, UK: Ashgate.

Nisbet, M.C. 2009b. Framing science. A new paradigm in public engagement. Pp. 1-32 in L. Kahlor and P. Stout (Eds), Understanding science: New agendas in science communication. New York, NY: Taylor and Francis.

Nisbet, M.C. 2016. The ethics of framing science. Pp. 51-74 in B. Nerlich, R. Elliott and B. Larson (Eds), Communicating biological sciences. USA: Routledge.

Obozintsev, L. 2018. From Skynet to Siri: An exploration of the nature and effects of media coverage of artificial intelligence. Unpublished Doctoral thesis. Newark, Delaware: University of Delaware.

O’Carroll, E. and Driscoll, M. 2018. ‘2001: A Space Odyssey’ turns 50: Why HAL endures. The Christian Science Monitor, 3 April. Available: https://www.csmonitor.com/Technology/2018/0403/2001-A-Space-Odyssey-turns-50-Why-HAL-endures [Date of access: 16 April 2020].

Orr, D. 2017. At last, a cure for feminism: Sex robots. Available: https://www.theguardian.com/commentisfree/2016/jun/10/feminism-sex-robots-women-technology-objectify [Date of access: 3 April 2020].

Osoba, O.A. and Welser, W. 2017. The risks of artificial intelligence to security and the future of work. Santa Monica, California: Rand Corporation.

Ouchchy, L., Coin, A. and Dubljević, V. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & SOCIETY: 1-10.

Pakdemirli, E. 2019. Artificial intelligence in radiology: Friend or foe? Where are we now and where are we heading? Acta Radiologica Open 8(2): 1-5.

Perez, J.A., Deligianni, F., Ravi, D. and Yang, G.Z. 2018. Artificial intelligence and robotics. London: UK-RAS Network.

Piekniewski, F. 2018. AI winter is well on its way. Piekniewski’s Blog, 28 May. Available: https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/ [Date of access: 17 March 2020].

Proudfoot, D. 2011. Anthropomorphism and AI: Turingʼs much misunderstood imitation game. Artificial Intelligence 175(5-6): 950-957.

Quer, G., Muse, E.D., Nikzad, N., Topol, E.J. and Steinhubl, S.R. 2017. Augmenting diagnostic vision with AI. The Lancet 390(10091): 31764-31766.

Ray, R. 2018. Andrew Ng sees an eternal springtime for AI. ZDNet, 13 December. Available: https://www.zdnet.com/article/andrew-ng-sees-an-eternal-springtime-for-ai/ [Date of access: 17 March 2020].

Reddy, S. 2018. Use of artificial intelligence in healthcare delivery. Pp. 81-97 in T.F. Heston (Ed), eHealth-Making Health Care Smarter. London, UK: IntechOpen.

Rochyadi-Reetz, M., Arlt, D., Wolling, J. and Bräuer, M. 2019. Explaining the media’s framing of renewable energies: An international comparison. Frontiers in Environmental Science 7: Article 119.

Rosenblatt, F. 1958. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65(6): 386-408.

Samuel, J.L. 2019. Company from the uncanny valley: A psychological perspective on social robots, anthropomorphism and the introduction of robots to society. Ethics in Progress 10(2): 8-26.

Schumann, S. 2019. Probability of an approaching winter. Towards Data Science, 17 August. Available: https://towardsdatascience.com/probability-of-an-approaching-ai-winter-c2d818fb338a [Date of access: 17 March 2020].

Shin, Y. 2019. The spring of artificial intelligence in its global winter. IEEE Annals of the History of Computing 41(4): 71-82.

Siau, K. and Wang, W. 2018. Building trust in artificial intelligence, machine learning, and robotics. Cutter Business Technology Journal 31(2): 47-53.

Siegel, E. 2019. The media’s coverage of AI is bogus. Scientific American, 20 November. Available: https://blogs.scientificamerican.com/observations/the-medias-coverage-of-ai-is-bogus/ [Date of access: 16 April 2020].

Sinur, J. 2019. So how goes that AI spring? Forbes, 29 April. Available: https://www.forbes.com/sites/cognitiveworld/2019/04/29/so-how-goes-that-ai-spring/#1c18387a23d4 [Date of access: 17 March 2020].

Sniderman, P.M. and Theriault, S.M. 2004. Pp. 133-165 in W.E. Saris and P.M. Sniderman (Eds), Studies in public opinion. Princeton, NJ: Princeton University Press.

Staff reporter. 2017. Machine learning may erase jobs, says Yudala. Daily Times, 28 August. Available: https://dailytimes.ng/machine-intelligence-ai-may-erase-jobs-says-yudala/ [Date of access: 17 April 2020].

Stahl, B.C., Timmermans, J. and Mittelstadt, B.D. 2016. The ethics of computing: A survey of the computing-oriented literature. ACM Computing Surveys (CSUR) 48(4): 1-38.

Stassa, E. 2016. Are sex robots unethical or just unimaginative as hell? Available: https://jezebel.com/are-sex-robots-unethical-or-just-unimaginative-as-hell-1769358748 [Date of access: 3 April 2020].

Strekalova, Y.A. 2015. Informing dissemination research: A content analysis of US newspaper coverage of medical nanotechnology news. Science Communication 37(2): 151-172.

Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433-460.

Vincent, J. 2016. These are three of the biggest problems facing today’s AI. The Verge, 10 October. Available: https://www.theverge.com/2016/10/10/13224930/ai-deep-learning-limitations-drawbacks [Date of access: 3 April 2020].

Watson, D. 2019. The rhetoric and reality of anthropomorphism in artificial intelligence. Minds and Machines 29: 417-440.

Złotowski, J., Yogeeswaran, K. and Bartneck, C. 2017. Can we control it? Autonomous robots threaten human uniqueness, safety, and resources. International Journal of Human-Computer Studies 100: 48-54.