
AI Governance Research & Commentary

In Defence of Friction.

Dr. Eric J W Orlowski, Research Fellow, AI Singapore

How AI is presented—in everything from language to user interfaces—shapes our perceptions and guides our interactions. In this piece, I suggest that by introducing moments of friction, we can unmask AI’s true scale and complexity, prompting more thoughtful and transparent engagement.

A while back, I wrote a piece on how and why language impacts the perception of artificial intelligence (AI). Specifically, the piece focused on the challenges of using flexible, metaphorical, and often unclear language when talking about and describing AI systems, and how this, in turn, implicitly (and sometimes explicitly) comes to define what an AI system is, or is perceived to be. Terms like “hallucination”, and other mainstays in the AI space, anthropomorphise these models, attributing human-like qualities to machines that merely process inputs and generate statistically probable outputs. In short, it is important to critically examine this language, because it drives the perception and understanding of AI.

I have advocated for more precise language and terminology to foster a clearer understanding of AI: what it is, its capacities, and its limitations. It is crucial to leave less to interpretation. In the words of art critic and curator Nora N. Khan,

We must discard dated and unfit linguistic and semantic structures that do not work to describe the reality of the subject within discourse of AI, AGI [artificial general intelligence] or ASI [artificial superintelligence]. As a cognitive exercise, this revisionist approach to technological language allows the general public to assess the value and goals of AI that we want as a society (2020:91).

Whoever controls the language can easily control the narrative. Unfortunately, the hype cycle has continued, and this kind of “dated and unfit” (ibid.) language continues to proliferate. We now have “reasoning” models that might soon be doing the work of PhDs (allegedly, anyway), or “agents” who will be making more and more decisions on our behalf. All the while, Sam Altman promises us, time and time again, that AGI (or ASI) is just around the corner.

With the apparent pace of technological development being so high, I want to revisit this discussion, at least in part. I want to speak less about language, however, and instead take a step back and view this through a slightly wider lens: how is AI presented to its users? Naturally, this includes language, but it also includes wider discourse, as well as user interface (UI) and user experience (UX) design, among other design decisions.

I recently read an article by design scholar Benjamin H. Bratton (2020) that very much inspired this piece. Bratton discusses the importance of having a clear and accurate definition of what AI is before anyone can even remotely begin to make sense of what to do with the technology. Though written from a design and development perspective, I think it fair to assume that the same observation applies equally to users—especially for a technology that is still very much emergent, and whose uses are not yet clearly codified, whether through a business case or through sociocultural understanding. Indeed, the use case has been an ongoing discussion—what exactly are we spending a bazillion dollars on?—and Bratton (ibid.), in part, foregrounds definitions as being part of the problem.

With the above in mind, Bratton provides two ways in which AI can be understood from a macro-perspective (see Table I below). Type A aligns quite comfortably with the popular perception of AI, as well as the way in which it is being pushed by large tech corporations, commentators, and pundits. In contrast, Type B provides a much clearer idea of what AI is, how it operates practically, and ‘cleaner’ metaphors for how to imagine the technology. It is through this tilt-shift, Bratton argues, that more fertile ground can be laid to think more accurately about how AI can be used, and perhaps (more importantly) how it ought to be used.

Type A: AI is almost human. It appears in animate guises: a servant, a buddy, a secret supervisor. Ideally, AI smoothly replicates human representational thought in silicon, such that it can seamlessly automate normal human tasks for normal humans. The more that AI complements human intuition, the more successful its integration with human culture. AI is (or should be) a reflection of our political economy and psychology, and so the spectrum of human-AI interaction runs from docile subordinates to active malice. Because of these correspondences, AI may one day pass human intelligence on our shared track. The solution to AI bias and harm is to ‘return’ what has been ‘lost’, making AI more comfortably, ‘naturally’ human.

Type B: AI is a heterogenous collection of sensing and signal processing technologies that augment diverse complex systems, including distributed emergent cognition; more a synthetic rainforest than a robot teddy bear. While deep learning has some functional isomorphs with animalian neurologic processes, and human-AI interfaces can mimic human thought and expression by interfacial layers, AI only partially reflects and overlaps human systems. It is messy and ultimately indifferent. Bias and risk should be addressed by contestation and the explication of multiple contrasting patterns. Intelligence exists in the world in disparate forms, many of which could, in principle, be augmented by AI, and so a more durable and sustainable AI culture includes its secular de-anthropomorphisation.

Table I. From Bratton 2020:91.

Part of the challenge, however, is that the Type B definition is more difficult to imagine: AI as a system distributed spatially, but also across adjacent technologies (servers, computers, internet cables, etc.), effectively implying that AI is less a single technology and more a cluster of technologies. Indeed, it might be more accurate to think of AI not as an object but as one of what philosopher Timothy Morton calls “hyperobjects”, or “things that are massively distributed in time and space relative to humans” (2013:1). Unsurprisingly, one of the core characteristics of a hyperobject is that it is difficult for humans to come to grips with; hyperobjects are difficult to imagine as a whole.

This critical approach to AI fits into an overarching project to (re)imagine and (re)examine AI specifically, and technology more generally, and its role in contemporary society. Core to this project is the concept of “sociotechnical imaginaries”, defined by STS scholar Lucy Suchman (2024) as

[D]escrib[ing] collectively imagined forms of social order as materialised by scientific and technological projects. These include aspirational futures that sustain investments in the military-industrial-academic complex.

These imaginaries give direction to the potential uses and abuses of technical systems within a social setting, and are inherently tied to what technology is imagined to be at all. Furthermore, sociotechnical imaginaries are shaped by the public discourse surrounding technology itself, and are thus informed by society, morality, ethics, politics, religion, and many other such factors. This means that these imaginaries are malleable, and that they can change substantially across space and time (see Hui 2017 and 2020 for a deeper discussion of this).

Returning to Bratton’s definitions above, how such systems are designed, and how they are subsequently presented and described, hold substantial sway over how such imaginaries are shaped in the wider sociocultural consciousness. Looking at contemporary LLMs, for example, typically overlooked UI elements carry substantial weight in shaping perceptions and expectations. The way that text is generated, as if typed out in real time, further functions to anthropomorphise these models. The speed at which the text appears may well be justified as ‘user-friendly’, but it equally operates as a visual metaphor for how fast the model is ‘thinking’: compare that speed with how long it might take me to manually type out the same string of characters, and you have a model that appears to be literally outperforming someone with a PhD—the Holy Grail of unclear benchmarks. Nonetheless, the image of a little mechanical entity sitting inside my computer-box spouting information at me is reinforced by these design elements.
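To make the point concrete: the ‘typing’ effect is a presentation-layer choice, not a window onto the model’s cognition. Here is a minimal sketch in Python of how such an effect is typically produced (the function name and delay value are my own illustrative assumptions; real chat UIs stream tokens over a network, but the principle is the same):

```python
import sys
import time

def typewriter(text: str, delay: float = 0.03) -> None:
    """Print text one character at a time, mimicking a chat UI's 'live typing'.

    The full string already exists in memory before the first character
    appears: the pacing is purely cosmetic, a design decision rather than
    a trace of any ongoing 'thinking'.
    """
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()
        time.sleep(delay)  # deliberate pause; nothing is being computed here
    sys.stdout.write("\n")

typewriter("Is there anything else I can help you with?")
```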

The issue is, of course, that just as with the actual Mechanical Turk, it’s all a trick. While there might not be an actual small elf sitting inside my laptop pretending to be an ‘artificial’ intelligence (Amazon Fresh stores notwithstanding), it is still easy to be fooled by appearances. After all, I am experientially speaking to a box—a box that gives me any and all excuse to anthropomorphise it. Indeed, as one types out a prompt to one’s LLM of choice, it is easy to conceive of the interaction as ‘speaking’ to one’s computer directly.

Yet, AI is less a talking box, and more a “synthetic rainforest” (Bratton 2020:91) of server halls and distributed networks spanning our vast planet, connected by both land and undersea cables, and run through a distributed power grid across a multitude of countries and jurisdictions. I am not speaking to my laptop so much as I am speaking to the literal planet.

Map of global undersea internet cables. These cables connect every landmass on the planet, and are perhaps one of the true (and hidden) wonders of human ingenuity.

The nature of AI as an enormous and distributed network spanning the whole planet—a hyperobject (Morton 2013)—is not a truth hidden from us by some nefarious conspiracy. After all, hyperobjects are inherently difficult to grasp. Reinforcing this dynamic is the fact that technology has a dubious tendency to fade into the background, or, to paraphrase Martin Heidegger (2011 [1927]), when one uses a hammer, one tends to focus on the nail.

In Heidegger’s own words, technology “withdraws” (ibid.:70) from our attention. It is remarkably easy to relate to this in a practical sense, too. Whilst French philosopher Maurice Merleau-Ponty (2012 [1945]; see also Latour 2002) uses the example of a woman with a large feather in her hat, who eventually learns to intuit the height of the feather when stepping through a doorway (i.e. incorporating the feather into her own sense of self), examples of this need not be quite so fanciful. More contemporary examples abound: with enough proficiency, drivers speak of “feeling” where their car is on the road, and my keyboard disappears beneath my fingertips as I am typing this out—I focus not on my hands, nor on the keys. I focus on the screen.

This phenomenon is of course contingent on the user’s proficiency, as well as on the technology working as intended: the car definitely becomes ‘visible’ again as you crash into a tree, and my keyboard magically ‘reappears’ if the O-key stps wrking. This kind of friction is typically considered a failure of UX design. The best design is the kind that is not noticed. Such principles certainly help explain the seemingly neutral, mundane, and ‘smooth’ design of most LLM interfaces, designed to be as unobtrusive, graspable, and unthreatening as possible. This goes a long way towards explaining the scrolling generated text, and many models’ programmed friendliness, asking “is there anything else I can help you with?”. It also explains why the disclaimer—that the model “can make mistakes”—is small, unobtrusive, and, frankly, extremely easy to ignore, or to just never notice. After all, nothing will break this immersion more than a large sign telling you that you cannot trust the machine.

Yet, it is in hijacking this principle that I find a potential to better foreground AI models and their services for what they are. Certainly, better and clearer language will also help, but I would argue that the use of strategic friction can be just as, if not more, effective. In short, strategically introducing friction into these systems is another tool for AI governance. Indeed, governance, whether through managing risks or improving processes, faces an additional challenge when there is a mismatch between how things are and how they are perceived to be. Bratton’s (2020) definitions of AI paint this picture clearly: how one understands AI directly impacts how one relates to bias, risk, or potential abuses of the system.

The idea of actively introducing friction into these systems is to manage the “withdrawal” (Heidegger 2011 [1927]) of the technology; to bring the proverbial hammer back into focus, and thus get a better sense of what tools we are actually using, and how they are being used. Furthermore, it will also make it more obvious when AI tools are being used, in what ways, and to what extent. Expanding beyond compliance-driven checklists and disclaimers, towards AI governance as something active, creative, and solutions-driven, will be key to sustainable and effective governance strategies. Establishing and engaging with design principles for AI systems will be a good start, especially if and when they remind us what we are actually doing.
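What might such designed-in friction look like in practice? A minimal sketch, under my own assumptions (the wrapper, its names, and the confirmation step are hypothetical illustrations, not an existing library or any vendor’s actual interface): model output arrives labelled as machine-generated, and a deliberate pause is required before it can be used.

```python
def with_friction(generate):
    """Wrap a text generator so its output arrives with deliberate friction.

    Rather than streaming seamlessly, the wrapper (1) labels the output as
    machine-generated and (2) requires an explicit keystroke before the
    text is handed over: a small, intentional pause for reflection.
    """
    def wrapped(prompt: str) -> str:
        draft = generate(prompt)
        print("[Machine-generated draft: statistically likely text, not checked facts.]")
        input("Press Enter to view the draft, or Ctrl+C to abandon it... ")
        return draft
    return wrapped

@with_friction
def toy_model(prompt: str) -> str:
    # Stand-in for a real model call; purely illustrative.
    return f"A plausible-sounding answer to: {prompt!r}"

print(toy_model("Summarise Bratton's Type B definition of AI"))
```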

I am also far from the only one who has been thinking about this. Unsurprisingly, designers, artists, and other folks with a proclivity for the creative have argued that the kind of interfaces we tend to associate with LLMs—chatbots—are the worst kind of interfaces. Though some have argued that chatbots are the worst interfaces, except for everything else that has been tried, others have written that these interfaces need to be fundamentally rethought. In the course of my research, for example, I have come across Maggie Appleton’s (still growing) collection of alternative interfaces. I am particularly partial to her daemons, whereby LLMs effectively prompt back: playing devil’s advocate as we write, or asking for clarifications throughout the text. Basically, it’s Clippy with an attitude—despite Microsoft’s attempts to avoid Copilot sharing the same fate as its infamous predecessor. Key to this, of course, is that whilst these counter-prompts (“Is there evidence to back this up?”) emerge organically through the writing process, whether to engage with them—and how—is down to a human operator; a toy sketch of the pattern follows below.
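The sketch rests on stated assumptions: Appleton’s daemons are LLM-driven, but here simple keyword rules stand in for the model so the example stays self-contained. The daemon scans a draft and interjects counter-prompts; the writer remains free to act on or ignore them.

```python
import re

# Each rule pairs a pattern that flags a potential weak spot in the draft
# with the counter-prompt the daemon raises. These rules are illustrative
# stand-ins for what an LLM-driven daemon would generate.
DAEMON_RULES = [
    (re.compile(r"\b(clearly|obviously|everyone knows)\b", re.I),
     "Is this actually self-evident, or does it need an argument?"),
    (re.compile(r"\b(studies show|research suggests)\b", re.I),
     "Is there evidence to back this up? Which studies?"),
    (re.compile(r"\b(always|never)\b", re.I),
     "Is this claim really universal?"),
]

def daemon_review(draft: str) -> list[str]:
    """Return counter-prompts for each sentence that trips a rule.

    The daemon only asks questions; whether and how to respond stays
    entirely with the human writer.
    """
    prompts = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        for pattern, question in DAEMON_RULES:
            if pattern.search(sentence):
                prompts.append(f"{question}  (re: {sentence.strip()!r})")
    return prompts

for p in daemon_review("Studies show AI clearly outperforms humans. It never errs."):
    print("daemon:", p)
```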

Calling chatbots “the lazy solution” certainly hits the nail on the head, and this line of thinking has been conceptually fleshed out further by Amelia Wattenberger, who argues that text inputs have no affordances, and that this creates a “no-man’s land”: a situation where human input is required, but the human is not in control of the outcome. She writes that “when a task requires mostly human input, the human is in control. They are the one making the key decisions and it’s clear that they’re ultimately responsible for the outcome”, before continuing,

But once we offload the majority of the work to a machine, the human is no longer in control. There’s a No man’s land where the human is still required to make decisions, but they’re not in control of the outcome. At the far end of the spectrum, users feel like machine operators: they’re just pressing buttons and the machine is doing the work. There isn’t much craft in operating a machine.

I think this skilfully sums up the problem with AI withdrawing, as I have framed it. The key is to actively avoid this “no-man’s land” of human-operated machines: not merely keeping humans in the loop, but letting us be in control. It is through remaining in control that new affordances emerge for users.

Though these writers and thinkers frame this very much in design terms, I think these ideas and examples nonetheless have great implications for AI governance more broadly. Reframing AI solutions away from machines that automate tasks, and towards tools (to use Wattenberger’s framing), also introduces strategic friction into the mix, as the limitations and complexities of these systems become more immediately noticeable. Meanwhile, the benefits go beyond finding new affordances for users. Clearer understanding and more direct human control promote more responsible deployment, making friction a valuable tool for AI governance.

AI systems may be touted as seamless, invisible assistants, but the risks and responsibilities of their use are too great for us to allow them to vanish quietly into the background. Deliberately designing in friction—whether through reimagined interfaces, counter-prompts, or overt reminders of what (and who) is really doing the work—creates necessary pauses that reveal AI’s hidden complexity, its limitations, and its genuine potential. By resisting the allure of frictionlessness and centring human agency, we not only guard against the hype and misdirection that so often accompany AI, but also open the door to more creative, accountable, and sustainable forms of governance. After all, recognising the tool as a tool—despite its distributed, planetary scale—empowers us to shape its role in society, rather than merely being shaped by it. In this sense, friction is not an obstacle but a crucial guidepost, helping us see AI for what it truly is and use it with intention.

  1. Bratton, Benjamin H. 2020. “Synthetic Gardens: Another design model for AI and design” in Atlas of Anomalous AI (Ben Vickers & K Allado-McDowell, eds.). London: Ignota. 91-111.
  2. Heidegger, Martin. 2011 [1927]. Being and Time. Oxford: Blackwell.
  3. Hui, Yuk. 2017. Cosmotechnics as Cosmopolitics. e-flux Journal 86. https://www.e-flux.com/journal/86/161887/cosmotechnics-as-cosmopolitics/. Last accessed 2024.04.16.
  4. Hui, Yuk. 2020. Cosmotechnics. Angelaki 25(4):1-2.
  5. Khan, Nora N. 2020. “Towards a Poetics of Artificial Intelligence” in Atlas of Anomalous AI (Ben Vickers & K Allado-McDowell, eds.). London: Ignota. 75-90.
  6. Latour, Bruno. 2002. Morality and Technology. Theory, Culture & Society 19(5-6):247-260.
  7. Merleau-Ponty, Maurice. 2012 [1945]. Phenomenology of Perception. London: Routledge.
  8. Morton, Timothy. 2013. Hyperobjects. Minneapolis: University of Minnesota Press.
  9. Suchman, Lucy. 2024. AI Aids the Pretense of Military “Precision”. Issues in Science and Technology. https://issues.org/ai-pretense-military-precision-suchman/.