This short piece argues that it is impossible to tame Artificial Intelligence in education, specifically in our areas of interest – policy and governance. While AI is a broad area, here we are discussing not artificial general intelligence, but more limited and specific forms of AI that encompass a wide range of techniques and tasks aiming ‘to make computers do the sorts of things that minds can do’ (Boden, 2016, p. 1). This involves efforts to produce ‘systems that think like humans, systems that act like humans, systems that think rationally, systems that act rationally’ (Russell & Norvig, 2016, pp. 2–3). Many of the primary techniques of AI use existing data sets that are historical and spatial. For example, machine learning makes predictions using algorithms that constantly learn and adapt from training data in order to identify patterns in new data sets (Mackenzie, 2015, pp. 4–5). Narrow forms of AI are already part of education systems, a lineage going back to the early teaching machines and intelligent tutoring systems (Watters, 2021). Contemporary AI in governance ranges from machine learning applied in data science approaches and student information systems, to ambitious Silicon Valley giants that aim to disrupt education systems and sectors through platform education. The latter became part of everyday life during the COVID-19 pandemic, when schools across the world were closed and students were forced to undertake education remotely at home, using common platforms like Google Classroom (Williamson, Macgilchrist, & Potter, 2021).
Alongside the promises of AI in education is a global policy push, from the European Union to the OECD, to understand, intervene in, and tame automated systems. Yet this notion of some sort of discrete action upon technology is problematic. This is not only because technology is co-constitutive of social, political and cultural relations (Mackenzie, 2002), and because AI can be seen as a ‘self-augmenting system’ that is neither controllable nor in control (Roden, 2015), but also because these ways of seeing technologies challenge us to think about the nature of control in education.
Deleuze, following Foucault, argued in the late 20th century that we would need to think not only of power as the disciplining gaze, of containment and boundaries, but also as an open, modulating form of control. As Deleuze notes, ‘[w]e’re moving toward control societies that no longer operate by confining people but through continuous control and instant communication’ (Deleuze, 1995, p. 174). Nevertheless, we are not witnessing the end of disciplinary institutions like education so much as the transformation of these institutions. In the 21st century, education continues to promise that learning and socialisation can be controlled, and education systems can be made to serve nation building through economic and innovation agendas. In some instances, these institutions — and the forms of governance that operate across them — have become more intensely disciplinary precisely through control. Think here of performative and accountability regimes around teachers and schools.
A particular type of control is exemplified by the way AI is enabled through what has been variously called algorithmic or digital education governance, which describes the overlap of datafication and machines in governance processes (Williamson, 2016). The use of these technical systems introduces new actors and organisations into education as part of a combination of machines and humans in the process of decision-making. But while the actors and technologies may be new, they build on the foundations of the administration of mass schooling. What has emerged in control societies is a political rationality of prediction in education policy and governance; a rationality, or ‘policy scientificity,’ congruent with the rise of technical approaches to the provision and administration of schools and systems. The rise of technical approaches in education parallels the introduction of the policy sciences and systems thinking, which were established in the 1950s and continue today as a form of cybernetic, techno-rationality in governance.
A cybernetic, techno-rationality builds on the historical practices of modern education systems, which are predicated on acquiring information about the performance of students and then issuing them with credentials underwriting the authenticity of that information (Lawn, 2013). What is becoming more evident is a perpetual anticipation of control in education, or the desire for more information, evidence and knowledge based on the view that it will allow us to ‘take control,’ make the correct decisions and govern the future (Amsler & Facer, 2017; Ramiel & Dishon, 2021; Webb, Sellar, & Gulson, 2020). And yet, as more and more systems are established to increase control, education becomes less controllable due to: (i) a proliferation of behavioural feedback loops that can have unintended consequences; (ii) the creation of new networks that incorporate diverse actors in governance, including platforms and algorithms that act as ‘black boxes’; and (iii) the increasing messiness of steering at a distance through data infrastructures and the probabilistic rationalities and predictions enabled by the data sciences. Generating more and more complex information systems in education provides us with more information, but reduces what we know about this information. As Bridle posits, ‘the more obsessively we attempt to compute the world, the more unknowably complex it appears’ (Bridle, 2018, p. 46).
A possible way to begin understanding the unknowability and uncertainty of contemporary education governance is what we propose as synthetic governance. This is built upon the idea that education governance is now part of network infrastructures that are the ‘medium of contemporary power, and yet no single subject or group absolutely controls a network’ (Galloway & Thacker, 2007, p. 5). Synthetic governance is an amalgamation of: i) human classifications, rationalities, values, and calculative practices; and ii) new algorithms, data infrastructures and AI, comprising non-human political rationalities that are changing how we think about thinking. This synthesis creates new potential through automation for thought and action in education governance contexts (Parisi, 2016). Synthetic governance is not human or machine governance, but human and machine governance, arising from ‘conjunctive syntheses’ (Deleuze & Guattari, 1983) that bring together and integrate data-driven human rationalities and computational rationalities, traversing both machines and bodies. We suggest performance and administrative data are increasingly being generated, collected and analysed in various configurations in order to govern synthetically. The synthesis of human and machine rationalities — ubiquitous, invisible, hybrid and networked — challenges the idea that education is a controllable site of action and progress. We think this synthesis also poses particular challenges to the idea of taming AI.
Increasingly, there are global efforts to ensure AI is more trustworthy and explainable (Bareis & Katzenbach, 2021). These efforts are part of what Zuboff describes as a strategy of ‘taming’ the digital platforms that constitute ‘surveillance capitalism’ (Zuboff, 2019). If tamed, technology can be directed towards socially progressive ends by utilising existing regulatory and legislative tools (Pasquale, 2020b). As such, a taming strategy can include the introduction of legal instruments such as the European Union’s General Data Protection Regulation and the EU proposal for a legal framework for AI.1 Regulation has been the primary mode of politics for technology in many social areas, from bans on facial recognition to issues of bias and privacy. However, it is not easy to tame large and powerful companies. We will need to consider how to regulate not only technology companies in education, but also AI and platform companies such as Google that are often located both inside and outside education (Pasquale, 2020a).
A focus on regulation locates action firmly within a desire for technology to continue to be part of the Enlightenment project of political and social progress. This approach sits most comfortably with leftist politics of technology, or what Zuboff calls the strategy of indignation, which argues that surveillance capitalist utilisations of data-driven technologies are not inevitable. For Zuboff, indignation arises from dissatisfaction with the mediation of our lives through digital platforms, and the rendition of our experience into data; it ‘teaches us how we do not want to live’ (Zuboff, 2019, p. 524). Various proposals for a more democratic approach to AI and emerging technologies have been put forward. Sadowski calls for the need to ‘democratize innovation’ to create alternative technology, which includes broader participation alongside ‘ensuring intelligent systems are also intelligible to the public’ (Sadowski, 2020, pp. 177–178). Similar proposals have been made by groups such as AI Now, who have lobbied for algorithmic openness and for banning the use of opaque algorithms in public services (Campolo, Sanfilippo, Whittaker, & Crawford, 2017).
But what if we don’t talk about taming AI? Resisting the deleterious impacts of automation will not be a matter of simply regulating or refusing the use of AI. Regulation or refusal alone will not stop the development and use of AI, nor does such a position reckon with the genealogy of the present moment and the long history of statistical reasoning in education governance that has brought us to our current position.
What we are suggesting is not some sort of abrogatory narrative, whereby disruptive AI and associated forms of technology run rampant through education. Rather, as Winner argues, the problem is not so much technological determinism, but ‘what might be called technological somnambulism — how we so willingly sleepwalk through the process of reconstituting the conditions of human existence’ (Winner, 1983, p. 254). While the instrumental, market rationalities underpinning education today will likely be reinforced by machines, it may also be possible that new rationalities and techniques create other ways of thinking about the problems of education governance, disrupting longstanding problems and solutions in education.
A taming position, as outlined above, depends on an assumed distinction between human and machine, locating political agency in a human subject who promotes, enhances, tames, regulates, hides from, or even destroys machines. An alternative is a politics premised on a non-dichotomous view of human and machine, one that works with the uncertainty that is the departure point and destination of new data-driven technologies. A politics adequate to synthetic governance would not juxtapose human agency and technological determinism; rather, we would need to consider how to more consciously navigate this reconstitution. It will not always be possible to break open the black box of AI and digital platforms, because this assumes a particular form of separation between human and machine that is not always tenable. Rather, we might ask: What kinds of worlds are being created by algorithms, and how will we respond to the types of new thinking and truths being created? What are the limits of rectification, of appeal, and of regulation? This perspective challenges some of the primary approaches to automation, and the technical and political solutions to issues of bias and black boxing, which reflect a desire for a human in the loop as a corrective or safeguard.
This does not mean that politics is impossible, but it may not be a politics of the deliberative variety. There is a sense in which the education politics that served the 20th century is unlikely to be one that serves the 21st. If the premise of the former was a dialogic politics (regardless of whether we can rightly be suspicious that this was ever the actual practice), it seems very clear that the 21st century is now one of explicit and recuperated cybernetic capitalism (Peters, 2015). We think it is vitally important that we develop a critical synthetic politics that responds not to fears that technology will get away from us (the singularity) so much as to the politics of networks that become so diffuse as to resist meaningful intervention. This would be a synthetic politics of education, which begins from the premise that there is no outside of algorithmic decision-making and automated thinking. We must think with and through our imbrications with other modes of cognition as a kind of ‘co-learning’ with automated systems (Walsh, 2017). A particular rationality is needed: to be open to the co-adaptation of humans and machines by recognising that machine learning is the latest iteration in a longer history of thought that has never been limited to the human.
Certainly, there is a need for a more robust discussion about regulation, data privacy, and the monetisation of data generated in education. At the least, in education we should consider banning AI technologies that have proven harmful, such as facial recognition, given its discriminatory effects on people of colour (Stark, 2019). That said, a framing of technology as constitutive of the social, cultural and political means understanding that any discussion of taming AI is connected to what we think the conditions of education are and should be. We should be cognisant that the same charges made against AI can be made against education – for while mass education has been a pathway to improvement for millions, it also carries with it deleterious consequences, especially for racialised peoples (Gillborn, 2008). We might want to ask very seriously whether education is worth recuperating with its current structures and practices, particularly those of formal schooling (Ball & Collet-Sabé, 2021).
As such, education is perhaps a site where we can embrace the synthesis of machines and humans and remain open to its uncertainties, risks and possibilities, with a carefully articulated view of those risks, rather than reacting against AI, embracing it uncritically, or suggesting that education is an innocent project needing protection from AI.
The above argument is a synopsis of the full-length treatment forthcoming in: Gulson, K. N., Sellar, S., & Webb, P. T. (2022). Algorithms of Education: How Datafication and Artificial Intelligence Shape Policy. University of Minnesota Press.
Amsler, S., & Facer, K. (2017). Contesting anticipatory regimes in education: exploring alternative educational orientations to the future. Futures, 94, 6–14.
Ball, S., & Collet-Sabé, J. (2021). Against school: an epistemological critique. Discourse: Studies in the Cultural Politics of Education, 1–15.
Bareis, J., & Katzenbach, C. (2021). Talking AI into Being: The Narratives and Imaginaries of National AI Strategies and Their Performative Politics. Science, Technology, & Human Values.
Boden, M. A. (2016). AI: Its nature and future. Oxford University Press.
Bridle, J. (2018). New dark age: Technology and the end of the future. Verso.
Campolo, A., Sanfilippo, M., Whittaker, M., & Crawford, K. (2017). AI Now 2017 Report. AI Now Institute.
Deleuze, G. (1995). Negotiations: 1972-1990, trans. Martin Joughin. Columbia University Press.
Deleuze, G., & Guattari, F. (1983). Anti-Oedipus. University of Minnesota Press.
Galloway, A. R., & Thacker, E. (2007). The exploit: A theory of networks. University of Minnesota Press.
Gillborn, D. (2008). Racism and education: Coincidence or conspiracy? Routledge.
Lawn, M. (2013). Introduction: The rise of data in education. In M. Lawn (Ed.), The rise of data in education systems: Collection, visualisation and use (pp. 7–10). Symposium.
Mackenzie, A. (2002). Transductions: Bodies and machines at speed. Continuum.
Mackenzie, A. (2015). The production of prediction: What does machine learning want? European Journal of Cultural Studies, 18(4–5), 429–445.
Parisi, L. (2016). Automated thinking and the limits of reason. Cultural Studies ↔ Critical Methodologies, 16(5), 471–481.
Pasquale, F. (2020a). Internet nondiscrimination principles revisited: Working paper.
Pasquale, F. (2020b). New Laws of Robotics: Defending Human Expertise in the Age of AI. Harvard University Press.
Peters, M. A. (2015). The university in the epoch of digital reason: Fast knowledge in the circuits of cybernetic capitalism. Analysis and Metaphysics, 14, 38–58.
Ramiel, H., & Dishon, G. (2021). Future uncertainty and the production of anticipatory policy knowledge: the case of the Israeli future-oriented pedagogy project. Discourse: Studies in the Cultural Politics of Education, 1–15.
Roden, D. (2015). Posthuman life: Philosophy at the edge of the human. Routledge.
Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd ed.). Pearson Education Limited.
Sadowski, J. (2020). Too smart: How digital capitalism is extracting data, controlling our lives, and taking over the world. The MIT Press.
Stark, L. (2019). Facial recognition is the plutonium of AI. XRDS: Crossroads, The ACM Magazine for Students, 25(3), 50–55.
Walsh, T. (2017). It’s Alive!: Artificial Intelligence from the Logic Piano to Killer Robots. La Trobe University Press.
Watters, A. (2021). Teaching machines: The history of personalized learning. The MIT Press.
Webb, P. T., Sellar, S., & Gulson, K. N. (2020). Anticipating education: governing habits, memories and policy-futures. Learning, Media and Technology, 45(3), 284–297.
Williamson, B. (2016). Digital education governance: An introduction. European Educational Research Journal, 15(1), 3–13.
Williamson, B., Macgilchrist, F., & Potter, J. (2021). Covid-19 controversies and critical research in digital education. Learning, Media and Technology, 46(2), 117–127.
Winner, L. (1983). Technologies as forms of life. In R. S. Cohen & M. W. Wartofsky (Eds.), Epistemology, methodology, and the social sciences (pp. 249–264). Springer.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.
Gulson, K. N., Sellar, S., & Webb, P. T. (2021). Synthetic governance: On the impossibility of taming Artificial Intelligence in education. On Education. Journal for Research and Debate, 4(12).
1. The EU is driving the most developed regulation concerning AI: