Artificial Intelligence 2

Perhaps, instead of calling this new ideological product AI, we should refer to it as “machinic thinking”; I’m using ‘machinic’ rather than ‘machine’ deliberately because this term seems, to me, to capture the ways in which this technology is designed, and promoted, to imply the idea of “better than”, not “different to”. The end goal of machinic thinking (from here on, MT) is to convince us that our ways of thinking, of devising solutions, of using compassion and empathy to guide our thoughts, are inferior to the shiny, (apparently) unemotional world of digital technology. One can argue here that AI’s aim is to formulate a world without morality, without conscience, without imagination – competitive individualism becomes the norm, dispensing with ideas of responsibility and obligation to other human persons in virtue of their being human.

The prevalence of the computer (in its various guises – laptop, phone etc.) in contemporary society encourages the analogy of the human person as machine. This is hardly surprising; we can see it already in Descartes, who likened the human body to a mechanical entity. In a similar vein, the dominant image of the human and of society in the eighteenth century was the clock, in the nineteenth century the tree and its branches, and in the twentieth century the machine, culminating in the likening of the human person, and the ‘processes’ involved in being human, to the computer. Is there, for example, a great deal of difference between Plato’s aviary theory of knowledge and the idea of knowledge as background software?

However, what is glaring here is the sleight of hand involved in our thinking about AI (let us keep calling it that, as the ‘accepted’ term). We are not being encouraged to think of the computer as human, but of the human as computer. ‘Intelligence’ on this model becomes simple data processing. The human person must become a machine, putting aside conscience, ethical reasoning and context. ‘Intelligence’ in AI is taken to mean ‘thinking’. But what does it mean to think? Is it simply the relating of data fragments?

The human person is encouraged to conform themselves to the machine because the machine (apparently) fulfils the goal of non-emotional decision-making. The machine is more efficient because it is uncontaminated by emotion, that fundamental human flaw. The more the human person uses the machine, the more they are surreptitiously drawn into becoming machinic, and the more the machinic stretches out into all aspects of being human.

There are two central questions here: firstly, how do we define ‘thinking’? (Related: how do we do thinking?) and, secondly, how do we define ‘understanding’? (Related: in what sense(s) can one be said to understand something?)

The comparison, as we allow ourselves to be subjected, whether consciously or unconsciously, to the computer, is with the human mind (not the brain) as machine, rather than the machine as human mind. This is because the computer, the machine, can do so much that we ‘forget’ it is simply a machine – a tool that we use; as Heidegger described it, a “Being with” technology. Yet the computer, and digital technology in general, has now become so ubiquitous that we forget this: it has become a tool that uses us, one that provides us with a form (the form) with which to encounter the world – what Heidegger called “Being for” technology. As we unconsciously conform ourselves internally to the machine, the machine then forms our relations to the external world, to people and objects.

This is not to suggest that “the machine” is ideologically ‘innocent’ – that, once invented, it was technologically determined to develop in this way. The computer as machine serves capitalism and the quest for profit just as the factory machines of the eighteenth and nineteenth centuries did. And, in the same way, the human person fulfils the role of serving and maintaining the machine, whilst simultaneously becoming psychologically part of the machine. Thus, the (capitalist) machine controls both the means of production and the flow of ideas in society. The worker literally becomes a servant of the machinery (so no change) and, in their social interactions, reflects the depersonalised, non-emotional form of the machine. However, this depersonalised, non-emotional form is represented as aspirational, paradigmatic of the “successful individual” in society – the notion being that to be successful one must discard sentiment and responsibility for others. The machine is the perfect individual: able to take decisions without thought for, or understanding of, human persons. MT is represented as “the way to Be”, held up as the apex of ‘progress’.

Now, in all of this, one must also consider neuroscience, one of whose aims is to map the human brain. Yet this is problematic. We are capable of mapping electrical impulses and pathways in the brain; we can, apparently, map certain thoughts to certain electrical impulses. But one is left with a question similar to Gilbert Ryle’s in his text, The Concept of Mind: shown the Oxford colleges, the visitor asks “Where is the university?”…shown the map of electrical impulses in the brain, one asks “Where is the mind?” or “Where is the thought?” or “Where is consciousness?”…or “Where is understanding?”

I have seen these kinds of questions referred to as ‘complex’ in an array of writings on AI(MT), yet, in these, the word seems to stand in for a multiplicity of unanswered questions. Take ‘understanding’. (N.B. Here we can see the relevance of Wittgenstein’s concept of “language games”, detailed in his Philosophical Investigations. An aside: could one read what AI(MT) does as playing games, in that one can play a game without understanding, or being aware of, all the rules that govern playing the game?)

When one thinks, or has ‘intelligence’, doesn’t this involve understanding? Or, we might say, an understanding? The qualifier is needed because of the myriad ways in which one can be said to understand something – by those who themselves understand that something. However, if I understand something, X, differently to you, is either of us willing to credit the other with (an) understanding of X? If your understanding differs from mine, we have to compromise, to continue our discussions of X until such time as we identify “common ground” (remaining aware here of Sartre’s “bad faith” [deception of the self by the self] and Marx’s “false consciousness” [deception by ideology]). In regard to AI(MT), “an understanding” is presented as “the understanding”: the shiny, unbiased, neutral ‘truth’ of the machine, reached without taking into account all that flawed thinking which ‘distracts’ the human person from reaching ‘proper’ conclusions. An understanding becomes the understanding, technologically determined. (We already have an example of this in our universities: roundtable discussion of end-of-semester student results has been replaced by an algorithm which apparently ensures ‘transparency’…but who programmed the algorithm? And what has been lost by this abjuration of judgement in favour of technology?)

But back to the question of ‘understanding’. What do we mean when we say that we understand X? Is it possible to arrive at a useful definition?

Yes (well, I would say that, otherwise I wouldn’t have asked the question). Understanding comprises context, specificity and moral thinking. This last involves empathy and compassion. One must possess the ability to confine oneself to “the background” when engaged in moral thinking (although it is essential to have a concept of oneself as oneself) – to balance subjective thought with an idea of the good for others. This is generally known, I think, as a conscience: the ability to reflect on one’s actions as they impinge on self and others; to formulate theories of action and to extrapolate how these will affect self and others; to change one’s behaviour, or alter one’s thought, based on consideration of past actions and thought; to postulate an idea of “the good” and then strive to achieve it, even though this idea is subject to constant modification through the processes of experience and of thought. In short, to engage in a perpetual process of self-analysis, comparing ‘facts’, theories and experiences. Back to Heidegger: one is involved in a constant process of Becoming.
