The term “Artificial Intelligence” (hereafter, AI) has managed to become part of everyday language without any apparent challenge to, or questioning of, what it means. Meaning seems the right place to start, particularly in regard to ‘intelligence’. The ‘intelligence’ being referred to, although classed as ‘artificial’, is actually that of human persons. Once more, we have two conjoined terms which, merely because they have been used together so often, apparently have meaning. However, once we separate these and begin to interrogate them, the certainties (of meaning) vanish.
What does it mean for something to be ‘artificial’? Shallow? A poor imitation of the ‘real’? This is a vital question when one uses ‘artificial’ in relation to ‘intelligence’ because, obviously, the term ‘intelligence’ has at best a series of disputed meanings, not some univocal, dictatorial (authoritarian?) definition which enables us to say that “X is intelligence, but Y isn’t”, although those on the right would try to convince us (insist) that they can say this. As the AI bandwagon swings into action, though, we can see this as the ultimate goal: a “technologically determined” definition of intelligence and, therefore, of truth.

The argument that “runs in the background” here is that of technological determinism: because A was invented at point X, its development into B at point Y was inevitable, despite human desire that it be otherwise. Raymond Williams, in his book Television, effectively debunks this argument, contending that simply because a technology – in this case television, but we can extrapolate to all other technologies – is invented at a certain point, this does not mean that its destiny is contained in its origin (in much the same way as, for example, we can make this argument for human persons). A technology does nothing in itself; it is the use that human persons put it to that is important. This is the subject of study: why is a technology, W, being used in way(s) Z? What is the motivation behind this use? Who does it benefit? Who does it control? What are the implications for those who are subjected to this technology?

We can also see a link to Marshall McLuhan’s Understanding Media, a text that focuses on the form of technology (again, television) and its effects: How does it alter the ways human persons think about themselves as themselves and of their relation to others in the world? Does it cause a fundamental dissociation from previous ‘pictures’ of what it means to be a human person?
Which traits of the technology do human persons take on, both consciously and unconsciously? In relation to AI, we might ask how the human person assumes machinic qualities, or attempts to mimic the apparently logical workings of AI.
These questions are, at this stage, secondary. There is, however, another point of interest in regard to our current terminology: how do we distinguish between ‘machinery’ and ‘technology’? What is the difference between the two? Machinery seems to be implicated in the idea of human control, an instrument amenable to cogs, spanners and hammers, something which cannot escape human control. Technology, on the other hand, is seen as an implacable, independent force, mythologised and mystified, a force that exists independently of the human person. Here, one can observe the essential sleight-of-hand that Williams exposes: in our contemporary technological society, the suggestion is that the computer is supremely logical, an entity (we appear to accept that these blocks of plastic, microchips etc. have actual Being) that is not subject to the factors that can ‘sway’ human conclusions: emotion; desire; context; consequences.
What we must ask here is: are these necessarily ‘faults’ in our thinking? Is technology simply a step towards achieving the dream of the Enlightenment Project – a world ruled by ‘rationality’ and ‘logic’? If so, the central question still remains: Whose rationality and whose logic? What kind of ideology guided, and still guides, these notions? Who does this ideology serve?
Under capitalism, we can identify a remarkably simplistic definition of ‘intelligence’: intelligence is that which increases profit. Put another way, that which generates wealth. But for whom? And to what end? This ‘definition’ sends us back to Heidegger’s concept of calculative thinking: how to make X happen faster and, therefore, apparently more efficiently and, therefore, more profitably. AI is, taking this ‘reasoning’ into account, the ‘perfect’ creation: it enables decisions to be made without sentiment or emotion, regardless of the human cost, all the while claiming to be (technologically) ‘neutral’, the result of simple calculations that are ‘necessary’ to ensure the one and only goal that matters: profitability, because it is the fulfilment of this goal that is the guarantee of all other ‘benefits’.
The suggestion is that profit is “common sense” or ‘natural’; AI legitimises this by making it appear that neutral technology favours profit, selects it as the only ‘normal’ goal when thinking/intelligence is shorn of the emotion that detracts from being able to make sensible, rational decisions.
In this framing, AI, particularly the ‘artificial’, is represented as superior to ‘ordinary’ human intelligence, which cannot, because of that “emotional flaw”, achieve ‘real’ conclusions: emotion will always get in the way, causing the structure of the argument and, therefore, the conclusion to be unrealistic.
Thus, AI rewrites what it means to be intelligent and what it means to think…and, by extension (what McLuhan calls ‘form’ in regard to TV), what it means to be human.
When asked, an intelligent writer of fictional literature would likely suggest that a book writes itself. During the process of writing, the writer allows a stream of objective impulses – springing from the writer’s personal history, the atmosphere in the writer’s studio, memories of a previous creative process, the writer’s own DNA, the study of writings by others, mythology, et cetera – to flow, as it were, into an artificial subjective literary truth structured in the form of the book’s narrative. That process, regarded at a superficial level, suggests a similarity with the data input and programming of Artificial Intelligence. Nevertheless, in the case of both human and machine, the outcome of the ‘creative’ process is artificial and subjective. Here, one might find common functionality.
Historically, humans have been willing to regard subjective literary truths as objective truisms. Writings produced by ideology-motivated authors have influenced human minds incrementally throughout history. Humans have been willing to murder each other in defence of these subjective artificial outcomes. The biblical canon, publications by modern economists, racist pamphlets, the weather forecast, oppressive laws and declarations of war represent, for many, indisputable truths. Even reason and logic can bring forward illusions of justice and objectivity where there is none. Indeed, if efficiency as a way towards the increase of profit becomes an ideological dogma, Artificial Intelligence will be represented as superior regardless of its subjective level of operation. And that notion will be defended with vigour by those who seek the profit.
The horse and cart, used as an analogy for technology in the service of economic efficiency, lays bare a truly lethal flaw. Technology (the horse), when made artificially ‘intelligent’ (the illusion of technology becoming an objectively real entity, as a horse is), will presumably be efficient at tasks that are difficult for humans to handle (pulling the heavy cart). The question is: will the technology be intelligent enough not to drive the cart into a ravine after the human driver has fallen asleep at the reins? An objectively real horse will stop at the cliff’s edge out of self-preservation. But will the illusory ‘objectively real horse’ do the same? And when the cliff is in sight, will the human mind become aware of the illusion – aware of the fact that the technology is no longer following the strand of expectation? Aware of the fact that the technology was in fact never a horse? That, as Chomsky puts it, submarines cannot swim?
If human awareness is so willingly deluded by subjective literary ‘truths’ (such as economic dogma, divine deterministic rule, militarism, et cetera), how will it detect that the proposed superiority of Artificial Intelligence is a fantasy in the first place?
It seems that when the boundary between objective reality and subjective artificial truth has been noticed, when the moment that dogmatic intention was set in motion is detected, and when the writer of artificial narratives is located, the moment of becoming aware of the artificiality reveals what it means to be human.