The Discourse(s) of Value 1

Do we only have one? A multiplicity? Do we inhabit a society that tries to suggest that we have only one? Or do we accept the premise of postmodernity, that we have none…

'None' is interesting: does none posit the idea that there are some which cannot be applied? That used to apply but no longer do? It always strikes me as being similar, in structure, to the atheist/god argument: an atheist is, in essence, asserting that there is something in which they do not believe. Surely the simpler thing would be to state that the question is ridiculous, and not worth thinking about – a non-issue. Is there a value in the discourse of god? Well, yes, for many there is, but that's only because of cultural conditioning and the human person's fear of death; the human person – well, some human persons – is unable to grasp the concept of a 'finish'. To put it bluntly, we die and that's it. Again, you can trace this whole issue back to Plato… This isn't to say that the concept of a god, and the uses to which this concept has been, and is, put, isn't fascinating. It is. For centuries, it's been a tool whereby unsupported notions of "the good" have been instantiated without any validity except their use-value to those in power.

This use-value spills into Art through arguments that only morally good Art can be aesthetically good. Do we accept these arguments as valid, or can we see them as similar to many others that cite God as a support for their perspective – not because God exists, but because the writer or makar concluded that it was too dangerous not to claim this? The idea of God has always resided in the hands of the powerful – as part of their power – therefore, to put it colloquially, were writers and makars of previous times merely "hedging their bets"?

Art is, as I've argued previously, a critique of, and protest against, the status quo. Whether by design or accident, it highlights the fractures, injustices and untruths of particular times, regimes or ways of thinking. It also analyses ways of Being. Each work begins as a critique of form: of brush stroke, of musical composition, or of writing – of how this medium represents and refracts in specific ways, given the vistas and limitations of its form. Form is the makar's first choice: what do I want to express and which medium should I choose? Usually, this choice is informed by familiarity, by an initial understanding of the discourse of the form in question. It may be the case that we find our chosen form wanting, have to push against its apparent 'boundaries', experiment, innovate, yet we start from an understanding of what we think the form can do, can provide. The discourse of Art (the discourses of the various media that go to make up 'Art') is unique in that it is not defined, not confined to a language that must simply be learnt and accepted; this discourse flows and grows (like water?) and refutes absolute notions of right and wrong. It is not tied to a particular epoch or ideology but to concepts of what it means to be a human person, in touch with other human persons (that comma isn't a mistake). The discourse of Art embraces other "Humanities subjects", not in order to act as a measure, or to invalidate them, but to engage in mutual learning and understanding. In this, it asserts its uniqueness in relation to other discourses.

The discourse of Art makes not a claim to truth, but to perspective, often to a perspective on (a) truth (that vanishes even as we (almost) grasp it).

Unlike the discourses of, say, medicine, psychiatry or business, the aim is not an exclusivity, a definitive right and wrong, a series of absolute truths (which obtain only within the structure of the discourse) operating in a teleological system. The discourse of Art is boundless, a vast open plain of endless possibilities unified by human persons. It has no overarching theme, no metanarrative, no guiding metric. It 'relies' simply on enthusiasm and the desire to express, to 'speak' (in whatever form that 'speaking' might take place). In this, the value of the discourse of Art lies in its ability to unify, to enthral, to reassert the human. Using a different kind of language (discourse?), Art liberates; it radiates in and through the human person while, simultaneously, allowing us (however fleetingly) to transcend our facticity. To reach beyond ourselves. Art that does this? All I can do is name-drop those who I know: J.S. Bach; Keith Jarrett; Resnais' Last Year in Marienbad; Antonioni's La Notte; Godard's Weekend; Rilke's poetry; Miles Davis' Agharta; Beethoven; Joyce; Proust; Rothko; Brecht.

I can name my works, and you can name yours. On occasion, these will be the same – works that intrigue, challenge and explain, enabling us to learn from one another, to identify themes that we both consider important. They will not, to offer a "negative definition", reinforce, insist and retrench. In simple political terms, the former will be left-wing, the latter right. Art is not created to reinforce bourgeois apathy, to present their world view as univocal.

Back to Eisenstein and Benjamin…and Derrida.

The Value of Mindfulness

Ah, 'mindfulness'! The buzzword of the zeitgeist, especially after the whole covid thing. Apparently, it's such a broad 'term' that it encompasses any kind of "mental health issue" you care to mention…which makes it simultaneously useful and useless. I suppose the closest we can get to a 'definition' is "living in the moment", aware of only what is happening NOW. Which seems quite bizarre to me: to experience the present moment you are using the perceptions that you've formulated from past experiences. If you weren't, then you would understand what 'NOW' meant.

This is, however, the "easy bit". The primary question here is why the emphasis on mindfulness (it's a multi-billion euro industry)? Why the stream of articles about different kinds of mindfulness? Why the ever-growing focus on mindfulness in the workplace?

Well, we can start with that final question, which is pertinent to my overall point. The reason that managements are so interested in mindfulness is that it gives them "plausible deniability" if a worker suffers from some kind of mental health issue, as in "But look! We handed out pamphlets, sent emails, encouraging folks to take care of themselves. If X didn't do this, we can't be held legally responsible."…and what we see there is the central point: you, the worker, are responsible for your own mental health. Not the company or institution, not the environment in which you work, not your living conditions, but you. In this formulation, the capitalist structure in which you are forced to live and work is blameless and, in this, we are not living in the present, but returning to the past.

In the nineteenth century, we see mental health issues (hereafter, MHI) as being genetic, that is, as being inherited. As such, MHI are passed from parents to offspring; environment plays no role. This is in keeping with nineteenth-century medicine in general, and connects with the eighteenth-century idea (in fact we can trace this back to Galen in regard to women) that biology is destiny. In the eighteenth century, this is clear in the Art of the day, particularly when one examines the novel: take Radcliffe's The Mysteries of Udolpho, for example, in which Emily's 'femininity' is based on her bodily reactions. The more she faints, blushes, cries, the more feminine she is (an idea of "feminine perfection"). These bodily 'traumas' indicate two important factors: firstly, the biological organ seen as controlling women was the heart (emotion is, in the phrase we're all familiar with, written on the body) and, secondly, it is her body, and its effect on her mental states, that causes her to be in need of male protection (Emily is passed, after a series of 'adventures', from father to son-in-law. Marriage is seen, as it was in the seventeenth century, as the guarantor of male inheritance). Her female weakness in relation to males can be seen as her female strength (in that it makes her desirable to those males who would protect her). Thus, the human person, in this figuration, is complete in and of themselves – there is no reference to the outside world, to the sociopolitical environment that they inhabit.

I don't want to get pulled off course, but it is worth mentioning Samuel Richardson's Pamela here. Although this follows the same patterns of female biology coupled to MHI, it also includes another interesting factor: class. Pamela is a 'maidservant', and is therefore forced to inhabit the enclosed spaces of the "big house" in which she works. These spaces become increasingly dangerous for her as she is pursued by the aristocrat, who is intent on "possessing her" ("sexual assault and rape" are more accurate terms) in precisely the same way that he possesses the other objects that he owns. As a woman, she must protect her 'virtue', the only bargaining chip she possesses to negotiate with those who own the means of production. Still, this is for another time…

So. The novel which illustrates this idea of what we might call "genetic inheritance" perfectly is Dickens' Oliver Twist. In this we see (or rather hear) Oliver, who is born in the workhouse, never knowing his mother, speaking perfect RP the first time he opens his mouth – despite having spent his life surrounded by grotesque cockney caricatures. His entire bourgeois perception and manner are passed on by genetics, not the environment. In general, this is a highly convenient product of capitalist ideology: the human person is a self-contained unit on whom the environment has little, if any, effect. Therefore, the working class are 'lazy' and 'feckless' because they just are, that's how it is. Aristocrats are entitled to govern because they are…nor do the same rules and laws apply to those of different genetic make-ups (we really haven't moved on from Plato if we accept this – look at his idea of a "ruling elite" laid out in The Republic).

The advent of Freud's theories (to which I'll return) and WWI in the twentieth century marked a move away from this kind of genetic theory. In one sense, WWI marks the break: it becomes apparent that environment is not only an influence on MHI, but the influence, as thousands return from the war changed, altered and shattered by their experiences. This period also marks, of course, the rise of socialism and communism (the latter for a couple of years before the advent of Stalin), the growing recognition that the supposed "ruling class" have no divine right to rule.

As the twentieth century progresses, this idea of environment being the governing factor in constructing human behaviour gradually gains ground…until the appearance of Reagan and Thatcher in the late 70s and early 80s.

With the return of populist politics (debatable I know, but populism began long before Trump, Johnson et al) in the 80s, we also see, as a fundamental element of right-wing ideology, the notion of "genetic determinism", in terms of both sex and class. Both Thatcher and Reagan create 'ideals' based on nostalgia, a harking back to a past that never existed. Once again, we see gender difference (a cultural product) wheeled out as sexual difference, together with, in Thatcher's case, a neo-nazi representation of the working class as being idle, criminal and rebellious. What we also see is what was known as "the centre left" taking a distinct step towards the right, aided by the collapse of the Soviet Union. This collapse (cf. Fukuyama's The End of History) is seen as the victory of capitalism over 'communism' (which the Soviet Union wasn't but, again, that's for another time). Perhaps I should replace "what we also see" with "what we experience".

Anyway, this isn't some kind of political history, but with the advent of Blair in the UK, neoliberalism became ever more rampant, laying the ground for the emergence of anarcho-capitalism.

A central theme of this is the return to the idea of genetic determinism which, in effect, means that you, and you alone, are responsible for what we might call "your situation". Gradually, the connection between human persons is replaced by a contract between individuals, the idea of community is replaced by "groups of individuals with the same interests (at the moment)" and, from these, emerges the nostrum of mindfulness.

The central idea of mindfulness, in its current incarnation, is that the individual is responsible for their own mental health, and that the basis of this mental health is genetic. If you find yourself unable to cope with the stresses and pressures of life, this is because of a flaw in you; it is because you have not taken care of your ‘self’ adequately. Therefore, MHIs are your responsibility, your fault. They have nothing to do with the environment that you inhabit – MHIs are manifestations of your own personal weaknesses. This is, of course, a highly convenient “get out” clause for capitalist ideology. If you have an MHI then you, and you alone, are responsible for taking the action to ‘cure’ it – put another way, to overcome your weakness. Mental health becomes just another figuration in which market forces can decide the level…can decide on what it means to be human.

Why spend so much time on this topic? Because the rise of mindfulness, its apparent acceptance by large sections of 'society' (particularly those sections that control the flow of ideas), requires us to rethink moral theory. If we discount, as mindfulness does, the influence of environment, the idea of competing rationalities, the idea of responsibilities to others, the idea of a community of human persons, then our moral theorising cannot continue as it has done. Our categories become absolutes, absolutes constructed by a ruling elite whom we have no business challenging. Market forces assume the role of God in the Middle Ages: you are assigned a role in society, and that role is you, is all that you are, because it is ordained by market forces/God. We must simply accept our roles because, with the univocal rationality that operates in this 'world', they are ours.

Of course, what we are doing by engaging with mindfulness is accepting a particular discourse (which is merely a structure – one amongst many – of interpreting the world) as if it were the only way of seeing, of perceiving, the world in which we live. This is precisely the way that the ‘themes’ of capitalist ideology attempt to inscribe themselves into the metanarrative of our existence, of our being-in-the-world. They pass themselves off as being unassailable, as being “just the way things are”.

The idea, for argument's sake, of asserting "yes, human persons in this particular system are constructed as self-interested individuals, but we must strive to recognise and rationally overcome this" is not considered, even for a moment. Capitalist ideology fosters an idea of laissez-faire: we surrender ourselves to (spurious) ideas of the 'natural' and "human nature", never contemplating (i) that these are ideological constructs or, (ii), that we can challenge and reformulate them. Simply because something is 'natural' does not mean we have to act on it, nor does it mean (even if we accepted it) that we cannot overcome it.

The central point is that these constructs are ideological and as such they can be deconstructed, traced back to their roots.

This kind of analysis is found, academically, in philosophy, sociology and literary theory but, above all, in the Arts which, as a matter of course, encourage us to see the everyday anew.

The Value of Power

Overall, the term 'power' seems, to me, to be a negative one – we talk about having power over someone or something, people taking power, abuse of power, political power and so forth. This is probably because, in capitalist society, power is linked to competition: power gives us an advantage over others; it is a means to an end in the sense that it links to profit and personal gain.

There’s a certain irony when we’re told that politicians, of all colours, want to “give power back to the people” because what is actually meant is that this power can only be exercised in a strictly delineated way – within the rules that have been laid out for us. Therefore, we can, I think, see the exercise of power as similar to the exercise of choice: I apparently have free will, and can express that free will by making choices. That is, however, a secondary concern. The primary concern is who decides what those choices are or, put another way, who defines “free will”?

As Badiou argues in The Communist Hypothesis, 'freedom' in a capitalist society is freedom to own and freedom to exploit. My freedom is involved in a constant struggle with your freedom; to be an individual, and to express my freedom, I must defeat your freedom – my freedom to be free always comes at the expense of your freedom to be free and vice versa. The choices I have within this system are mere facsimiles of choice because they have already been dictated by the system – in exercising my freedom I am, knowingly or not, serving the system, maintaining it. If we interrogate each of our choices, from the trivial to the serious, they have been decided on already by the limited and narrow range from which we are allowed to select. Within capitalism, this web of freedoms and choices is inextricably bound to the corresponding web of oppressions and repressions so, as I've stated above, my expression of my freedom comes at the price of the repression of your freedom. For example, as unions have 'won' concessions from employers in the West, those employers have sought countries where unions are not as strong/virtually non-existent (often due to government prohibition), where they can exploit cheap labour and weak health & safety laws, continuing to make a profit for their shareholders by offering us cheap products. Another irony here is that the workers who produce our goods will never be able to buy them because of low wages – in a similar way to, say, the craftsmen who built mansions for the wealthy in the eighteenth and nineteenth centuries.

This idea of freedom of choice always strikes me as being akin to Art and interactivity – video games that let you choose what to do next; plays that take votes on which direction the narrative will go in; online 'novels' that let you choose how to proceed (not too keen on calling them online 'novels', particularly when there seems to be a notion that making an online book should mean "trying to reproduce the material novel"…which obviously it shouldn't; all that space for creativity and it's "Look! You can flick through the pages like a real book"). All of which have the 'choices' inbuilt before you start: oh, you can do X, Y or Z, but only because the makar has given you those choices (so they're not so much choices as options). It's much the same as language and our sociopolitical system: we are born into these, so our 'choices' are pre-programmed – as are the (apparent) 'values' by which they operate. I'm not suggesting that because these systems (all subsumed in the over-arching system, capitalism) pre-exist us we can't recognise them for what they are and alter/change them (or even sweep them away altogether). In fact, I'd go so far as to say that the "first step" (apart from being an album by The Faces) is to recognise this structure. All one has to do then is work out how to overcome them.

We could go back to Plato and Aristotle, and identify them as being the founding philosophers of what we call "Western Civilisation". If we think of our conceptual scheme, and everything that flows from this, as a calculus, it is possible to argue that all we are doing (even 2000+ years later) is enumerating the propositions of this calculus. However, I'd also argue that recognising this is the first step in engaging with it and thinking ways out. I've phrased that deliberately, "thinking ways out", to draw attention to it. As Marx wrote, not only do the owners of the means of production own the means of production, they also control the flow of concepts and what those concepts consist of and in. The fundamental basis of capitalist ideology is to convince the majority, the proletariat, that the interests of the minority, the ruling class, are their interests too (false consciousness). We can see this at the moment in the wrangling over inflation: apparently, if we all get wage increases that are in line with inflation, Armageddon will be the result. Somehow, this isn't the case for companies making ever-increasing profits. Rent control is 'bad'. Having utilities like water, electricity and gas in public ownership is 'bad'. "The Market" is the solution to all our problems (as was God in centuries past – and look at the similarity in language); it will provide, will find the "right level". Inflation is the cause of austerity, not the blatant greed that lies at the core of capitalism, and the subsequent disregard for the millions of lives destroyed by, and lost to, it.

Go back to that idea of the web: what it allows is for the imposers of austerity, who oddly enough never suffer its effects, to claim an entirely bogus distinction between 'direct' and 'indirect' impacts. These are all directly connected: austerity requires cuts in jobs (always for those at the bottom of the heap, and to maintain profits) and public services, which means that those who become jobless, who become homeless and who are then afflicted by mental health problems (as a result of the values imposed on them by capitalist ideology) are left with no support. People are plunged into abject misery, which turns groups against one another, turns people against one another, turns partners against one another, turns parents against children. People turn to 'crime' (although it's arguable how we define this), turn to drugs (of which alcohol is at the top of the list), turn to self-harm, turn to suicide. There is no 'indirect' cause here. All of these are directly connected by this web of ideas. It is, quite simply, a case of thinking it through – of identifying the system and its effects. Having done this, to still take/approve (vote for) actions that will result in condemning those in misery to yet more misery is a deliberate act. A choice for which you can be held responsible. A value (that you see your own well-being and profit as more important than other human persons' welfare) that you display publicly. An aside here: if politicians do not recognise that what they are doing to people is wrong, then why do they deny doing whatever it is, why try to spin it?

One might argue that the greatest 'value' in the capitalist armoury is apathy. The power to convince people that they are powerless, that whatever they might do or say will be ignored or sidelined. Apathy is the most dangerous weapon the neoliberals/anarcho-capitalists wield. It cuts down opposition before it becomes opposition, but it also has an aspect that is more insidious: it causes people to feel that they have failed, failed themselves and failed others and, thus, to accept the blame for the state of society while recognising their own powerlessness.

However, in the past two decades or so, a far more sinister weapon in the fight for apathy has emerged: mindfulness.

The Moralising Self

Moral philosophy (MP) is always the one that scares people. For years, philosophy was split into two parts: Logic & Metaphysics (L&M); Moral Philosophy. I was advised to keep well away from MP. For that reason, in my first year, I went with L&M. Rather tedious to say the least. Logic was boring – let's put stuff into pseudo-mathematical formulae so we don't have to deal with the emotional implications…and Metaphysics: Descartes, vaguely interesting because of the Cogito, but tedious because it stands or falls on the belief in a benevolent God. Hume was quite good – no God for him. Kant and his desperate attempts to create a world run by reason, but which also relied on a God (could there be anything more unreasonable?). There must have been others, but I can't remember. I do remember epistemology. That was fascinating.

In second year, we hit the moral philosophers. Aristotle, plus some others I’ve forgotten. Aristotle stuck with me, and I stuck with him. No mythology of the origins of morality, no “natural law” or morality being handed down by God.

With Aristotle, it’s straightforward. Person X is pointed out to you as being ‘good’ or ‘virtuous’…and you imitate them. Once your imitation becomes habit, that means you’re good too. What’s good? Whatever your group, tribe, society thinks it is. However, unlike the absolutists, this concept of ‘good’ can change; there’s room for manoeuvre. His concept has always seemed more human to me.

Oh, before I go on. I studied the Analytic school of philosophy. Apart from a couple of option courses in third and fourth year (Existentialism and The Continentals as far as I recall), those from abroad – Derrida, Marx, Sartre, Foucault, Adorno, Nietzsche – were confined to Literary Theory. Odd, but there you are.

From Aristotle, we went to Kant (the categorical imperative is useful, but too much like the Bible). Some attention was paid to Aquinas (who simply sanitises Aristotle for the Catholic Church) and to Hume (who has the bizarre idea that unless a work is morally good, it can't be aesthetically good). Then there was Rawls – take what exists and justify it. By this point I was reading, amongst others, Wittgenstein, Zola, Dostoyevsky, Sartre, Derrida, Foucault, and it seemed to me that the distinctions found in UK universities, between sociology, psychology, literature, politics etc., were entirely fabricated, in that separating these subjects disarmed them. Separating them rendered them meaningless, and trapped them. They simply became academic topics. Naive as it sounds, I wanted (and want) a philosophy that will change the world. This is what continental philosophy (even calling it that is redolent of the old "fog in channel, continent cut off" joke) was trying to do, by refusing to accept false distinctions.

Right, back to the topic. Aristotle, it seems to me, gives us a fairly accurate picture of how we 'learn' morality: initially, we imitate the "good person" which, when that imitation becomes habit, makes us good too. However…this only works if we exclude the human capacity for independent thought. The capacity to compare and contrast, and to draw conclusions for ourselves. While the imitation game might work when we're young, as we get older, become teenagers, doing X because Y number of people think X is good begins to lose its shine. We question, we interrogate; we ask how X became a good, we ask how X can remain a good if it disadvantages a particular group or class of other people. We struggle with the rationality of how X can be a good if it rests on injustice. This is where the phrases "It's just the way things are" and "It's always been this way" surface, as apparent justifications of irrationality and injustice. Many accept this and "move on"…but now the central question becomes "Why do they move on? When the injustice is so glaring, how do they justify this to themselves?". Is it the case that only I can see this injustice? No. It's a case of apathy (learnt from parents and peer groups), plus the ideological construction of "the individual" which, in turn, reconstructs the concept of 'injustice' as one of 'laziness' and "lack of ambition". This ideological reconstruction is far-reaching: competition becomes an integral part of "the human condition", therefore, failure to be competitive becomes an individual failing. At the same time, the placing of the individual as the foundation of society displaces the community, makes 'community' a category error; instead of 'communities' we now have "collections of individuals", with the idea holding these together being that of the contract and "benefit to the individual". This becomes the basic unit of this 'society': "Does X benefit me as an individual, or should I walk away and find greater benefits elsewhere?". In short, the aim is to sever ties between people. The idea that I have duties, responsibilities and obligations to others in virtue of their being human is disparaged, made an indication of weakness.

So what we have, returning to the Aristotelian formulation of "the good", is an entirely self-centred concept, one that represents care for others as a weakness. The Kantian formulation has the potential to alter this, but doesn't; as Kant relies on the existence of God, the ultimate notion is that the 'reward' for living a good life (whatever this means) is received in Heaven. In fact, it becomes quite obvious that Christianity (religions in general) is an ideological construct, designed to bolster capitalist ideology per se, the suggestion being that the more miserable your situation on earth is, the greater your reward in heaven will be.

But…what of the concept of rationality in all this? Well, for a start, the Age of Enlightenment is undercut by its exponents relying on an idea – religion – that is supremely irrational. The other idea, that the human person and human society are 'perfectible' through rationality and science, begs one particular question: do we conceptualise rationality as progressive, something that develops and expands over time? Add to this the idea that, if we take 'happiness' for human persons as a rational 'end', attempting to pass off competition as natural is irrational – in that it causes misery and contributes to the sum of injustice.

If happiness of human persons is rational, then competition and selfishness are irrational, which suggests that those who base their actions on the latter are, in moral terminology, ‘bad’. If we formulate rationality as a striving towards “the good”, and see rational thought as progressing towards this end, then it makes those who stress competition/the individual irrational or bad (even ‘evil’: if you know that X is the good/proper action in this situation but you choose Y, the bad/improper action then, in your choice of the latter, you are choosing evil which it can be argued makes you evil).

If we acknowledge that there are competing rationalities, then how do we establish a metric between these to decide which should hold sway – even which rationality is more rational? If we hold that the more rational rationality is the one that results in the greatest happiness for the greatest number of people (utilitarianism), then those who act against this must be held to account. We can also say that capitalism, which sacrifices the community to the individual, is irrational. However, from this 'formula' it becomes apparent that rationality does not triumph merely in being rational, otherwise capitalism and its 'ideas' would be null and void. Therefore, there must be some other factor here, one that counts for more than simply being the most rational: power.

The Value of Self

This is something I intended to discuss anyway, but given Sunak’s blatant attempt to take control of the way people think of themselves as themselves, I’ll do it now.

So, in capping "low value" degrees, Sunak is doing two things: firstly, suggesting that if you are not motivated by profit and personal gain there is something 'odd' about you and, secondly, that if you want to pursue these "low value" degrees then you're a 'loser' with no self-esteem. The more dangerous of the two is the second…Let's look at the subtext: it essentially says that you value yourself so poorly that all you aspire to is an A&H degree. Not only that, but you're prepared to go into public service and be supported by the "real people" who pay taxes. You obviously already know that you're not able to compete. Your aspirations are poor, so you're going to leave having a "real job" to others; they'll earn loads, be able to support their family, provide all the benefits of the material world…but no, that isn't for you, you're ordinary.

This capping will be sold as "Look! We're putting money into worthwhile degrees that mean you too can be materially successful. Forget those other kinds of degrees; they're for the lazy to look after the lazy (and feckless)."

So, back to the beginning. How is the self formed? What influences the formation of self? Can the self ever be said to be a finished project?

Well, starting with the last question: Heidegger argues that we should not talk about 'being', we should talk about 'becoming', because the human person is in what we might call a constant state of flux. Ideas, concepts, notions of the self are constantly being created, modified, discarded. This idea becomes clearer if we refer to Nietzsche and the aesthetics of self-creation; Nietzsche sees the self as a work of art that is constantly being made and re-made. The only terminal point for either Heidegger or Nietzsche is death.

There has been a tendency in philosophy to see the self as being created in a kind of "splendid isolation", discounting media and others as influences. Not exactly a realistic picture of how the self is formed. We choose from a variety of sources; in recent years this 'bank' of sources has expanded, including (but not limited to) literature, TV, music, social media, parents and peer groups. When we consider the latter two, we must also assess the impact of the first four upon them. Is there a kind of circularity here? Do we, in constructing ourselves, reflect or refract these influences? We might also ask, in regard to reflection and refraction, how has this changed over the past number of years? Are we now more prone to reflection?

When examining how we get, and develop, our moral values, can we still claim that these come from our parents? As the Church has lost its position as moral arbiter (because many of its exponents were found to have feet of proverbial clay), so have parents. Nowadays, there are other, more influential 'bodies' involved: TV (briefly: TV is about drama. Thus, what kind of morality does it inculcate in spectators? Being 'kind' and 'thoughtful' is not the stuff of good drama. Couple this with TV being bedded into 'realistic' settings, populated by relatable characters, and TV morality achieves a supposed 'realism' that it does not deserve); Music (the predominant form of music presents the listener/spectator with a one-to-one format, whereby there is straightforward identification of singer with listener/spectator – what the singer expresses are feelings, sensations and reactions felt by the listener/spectator – who often has the desire to 'be' the singer); Film (this is designed to fulfil the desires of the spectator. We can say that film involves wish-fulfilment on the part of the spectator. A film character enables the spectator to think "I wish I could act in way X". Again, most film narratives are enacted in a "realistic environment", which can cause the spectator to mistake a fictional environment for the one they inhabit); Social media (in social media, the preceding forms of media meet. The spectator/user becomes the central character in a fiction of their own devising. They craft an image (or images) in the likeness of how they wish to be perceived (circularity again: where does this perception come from? How is it formulated?). This image is then mistaken for a real person by others, and subsequently influences their self-crafting).

If we take a step back for a moment, and look at what we can call the history of the self, it seems that we have come full circle. The concept of the self 'begins' in the Renaissance with the Copernican Revolution. It is revealed that the earth is not the centre of the universe; it is merely another planet in orbit around the sun. The human person is displaced, as is the idea that God ordains one's position in society. We might say that this period marks the beginning of the death of God. Once the absolute power of this God is questioned, we can see people beginning to fashion themselves (not in isolation – where God once held absolute power, other institutions rush in to fill the gaps). This self-fashioning takes the form of a persona devised to dominate public life. At this stage, there is no concept of a "private self". It isn't until the turn of the nineteenth century that the split between public and private self appears, most notably in Wordsworth's "Preface to Lyrical Ballads". What emerges from this is the now traditional idea of a public and private self, with the latter considered to be more authentic, more sincere. We can trace this back to Descartes' split between body and mind, and to the work of the German writers Goethe and Schiller. This concept comes to dominate (in some respects, it still does), particularly the idea that the private self is the 'true' self, the public self being simply what one presents to the outside world. This public-facing self is seen as constructed with what one wants others to see as its guiding principle.

At the end of the nineteenth century we have the appearance of Sigmund Freud. His theory of the unconscious is based on explaining our behaviours both to ourselves and to the outside world. I'll come back to Freud, so suffice to say for the moment: unacceptable behaviours and unfulfilled desires are suppressed into the unconscious, where they remain. These behaviours and desires battle for release, and can manifest themselves in overt behaviours and desires which appear to be unexplainable in terms other than referring to the unconscious as a cause. What's interesting here is that, while Freud posits only the idea of a personal unconscious, Jung goes further, in that he adds the idea of a universal unconscious to the personal. This can also be "held responsible" for certain behaviours and desires. Perhaps we might see TV or social media as modern manifestations of this universal unconscious? TV can be said to cause us to behave in particular ways which, if we took the time to analyse them, we would find originate in TV programmes.

Once we get to the end of the twentieth century and the beginning of the twenty-first, social media has become a permanent presence in our lives. Facebook, Instagram, TikTok and others all play a significant role in our lives. However, what these have also done is move us away from what was the dominant idea during the twentieth century, of a public and private self, back to the idea of a public self only. We create the selves that we want others to see (judge us by) online; this is not confined to our psychic selves, but, through Instagram and Photoshop, our physical selves can be crafted into the versions we desire.

Is it possible to talk about an authentic self anymore? There are so many persuasive influences in the contemporary world that it becomes increasingly difficult to see where the influence stops and the self begins. Privacy, we're told, is a thing of the past – a concern identifying an older generation, out of touch with modernity.

So. With all these influences and no concept of privacy (apparently), how does the human person make choices? And, more importantly, are these free choices?

The Value of Consistency

Or we could swing this round, “The Consistency of Value”. I’ve already discussed the idea of consistency as a “power play”, a mechanism that forces the disempowered into preordained channels, yielding preordained results.

We can also discuss the idea of value in relation to what we are told is the human desire/need for security and safety (a film is said to fulfil this need by giving the spectator a beginning, middle and an end – something they can never attain in "real life"). However, can we think of something we want, or need, as a value? The word has an attached "moral aspect" in that when we talk about a value, our disposition is "I think that this is worthwhile, you should think so too" or "This value distinguishes us from X". So once again, we have a competitive notion attempting to slide in unnoticed. Values become part of "culture wars": we are a democratic society with an elected leader; they are a fascist state, led by a dictator. Values facilitate separation and tribal identification (take flags, for example, a kind of shorthand for "this is what we believe in", or "this indicates my membership of this group"), therefore, one might say that they facilitate aggression between groups, whether this be county, national or international.

There is also the question of whether the values that we suppose that we ‘have’ (in some sense or other) are chosen by us. One can argue that, in many cases, we simply ‘adopt’ values, without giving thought to what they mean and their implications. Nor do we interrogate the values of our respective nation-states.

On a personal, micro level, we use what we class as our values to make statements about ourselves: our beliefs, our moral worth (based on the values we espouse). I'm not going to be diverted by this, but we should be aware of the distinction between the values one professes and the actions one engages in. This wasn't a problem for the ancients because they drew no distinction between mind and body. One was judged on one's actions in the world – the values you possessed were 'distilled' from empirical data. Only with the advent of Descartes, and the separation between mind and body, does thought begin to take priority. This becomes decidedly more important with the advent of mass media. What, for example, are we to make of someone who, on seeing TV coverage of a famine, professes sorrow at the sight, and anger at the inaction in alleviating it?

Anyway, personal values. Where do we get these from? Are they static? I think the answer to the second question is "Obviously not". To argue that values, once formulated, remain static is to deny the influence of others, and of mass media. There are a few things to be said about the ways in which media influence value-formation, but I'll come back to that.

Where do personal values begin? Well, usually at home, then in school, then? (it used to be Church, but those days are gone). So, initially, our values are not ours; they come from interaction with parents and grandparents, from the narratives we’ve read as infants, from the schools we attend. So our idea of value-formation is embedded in our consciousness from an early age. This is done by comparison, but not by us, by our parents. We’re told that X is good and that Y is bad…from this we learn to generalise, to create categories which are, at first, little more than guesses: if X is good then Z is good too. So we learn by association. Of course, what this imposed system deals in is binary oppositions, so the framework for reproduction is laid.

One of the headlines today (15/07/23) is "Sunak puts cap on 'low-value' degrees". He means degrees that don't result in students getting professional jobs that pay well. So, that would be A&H then. Of course, the headline was never going to read "Degrees that indicate motivation by something other than profit to be capped" or "Greed degrees to be given the go-ahead". The Tories are using every trick they know to enforce the idea that everyone is an individual motivated simply by money; every so often something like this breaks cover – a move so despicable it takes your breath away.

But it does illustrate the point I was making in the paragraph prior to the one above. Binary opposition, profit and individualism: ideological factors that the right combine to shore up their argument that capitalist society is 'natural'.

What does ‘natural’ mean?

The Discourse of Imaginative Thinking

So, if we are to avoid the mistake of allowing the discourse of IT to generate its own terms of reference, how are we to proceed? By linking it to those fields which have traditionally been seen as indicators of value: epistemology; rationality; logic. However, as I remarked earlier, these fields act as 'guardians' for the conceptual scheme of Western thought. They also guarantee a (the?) fundamental element of Western engagement: judgement. In Western society, comparison, and therefore judgement, is woven into the fabric of society. Comparison generates competition, a propensity towards binary opposition: X compared to Y = X is better than Y. From this basic formula, we can extrapolate to the capitalist model: if X is better than Y, then I need X. However, after a period of time, X needs to be replaced by A, because X has been surpassed by A (whether it really has or not – the basis of consumerism). Repeat ad infinitum. Initially applying to objects, this formula has gradually been applied to society in general – to personal relationships, whereby people become simple objects, whom we utilise to achieve specific aims. Put another way, we now treat relationships as 'things' that we can profit from – the central concern becomes "How does this relationship benefit me?"

If we were to integrate IT into this system, as outlined above, then it would become part of the system it purports to reject. Therefore, IT rejects two fundamentals: (i) profit as motivation and, (ii), comparison/judgement as a basic component.

Of course, floating around underneath all this is yet another fundamental: consistency. How many times do we hear this cited? "You need to be consistent…", "But, to be consistent, you must do…". Western thought has been terrorised by the demand for consistency. It is responsible for the notion that human persons crave 'safety' and 'security'. Yet which comes first? The demand for consistency acts as a mechanism of conformity. If an argument is inconsistent, it can be dismissed – seen as illogical and irrational. However, the question here becomes "What is meant by 'irrational'?" In other words, whose rationality are you saying must be prioritised, in comparison to which this is irrational?

If I say that IT manifests its ideas differently in different people in different ways, then I am automatically seen as presenting a system that cannot be systematised, that is not systematic in its approach, that one cannot define by referring to a set of rules that operate systematically, leading people to draw consistent conclusions from similar 'cases'. There is, however, one 'requirement' for IT: the pursuit of the good for others. This 'requirement', as it remains undefined, leads to the expansion of the human person's capacity for empathy. Nor does it become predictable, given that there is no demand for consistency; therefore, one can behave in completely different ways when presented with an identical set of circumstances. Following from this, there is no one way of defining "the good". The good, therefore, is not a fixed, stable category that can be refined by making X number of comparisons. Thus, because the good is the only attribute of IT, those who engage in IT expand and explore the concept of good, and the ways in which it permeates all facets of life, both personal and professional.

There is a problem here…because people of my generation are so conditioned to thinking in binary oppositions, breaking out of this habit can be (is) very difficult. I can already hear the questions piling up in my own mind: if you're going to talk about good and what it is, then you have to talk about bad and what that is (generated by the binary mind) – much the same kind of question posed when discussing the influence of Art, with its insistence on logic and consistency: "Well, if you're going to argue that Art can have a benevolent influence, then you must consider Art as a malevolent influence." You have to be consistent, logical. We see the same kind of argument in regard to opinions, say, on Art. If all opinions are valuable, then all opinions can equally be considered to be valueless…but this is logical if and only if all those expressing an opinion have the same knowledge of a particular area. Consistency requires that we must first judge the abilities of those expressing an opinion. But then, who judges the judgers, and who judges the judges who judge the judgers? Ad infinitum.

I suppose I’m getting at two things here. Firstly, in the “chain of consistency” we eventually halt because it would take too long and, secondly, following on from this, we can see the demand for consistency for what it is: an act of power…”I am in a position to demand consistency from you, in that I expect you to follow my rules. My demand that you follow these rules, and my definition of consistency, is so that you must obey me. I dictate the rules of the game, so the game is mine; when you play it, I already know the outcome(s).”

Which brings us full circle, right back to how A&H are being excluded from TUs. The rule of the game is profit.

Valuing Imaginative Thinking

At this point, I should be doing the usual "philosophy by numbers" game; that is, I should be investigating how IT relates to epistemology, to rationality, to morals etc. However, it doesn't take much to see the obvious flaw in this: a way of perceiving is initially developed BUT it must then be reconciled with pre-existing (philosophical) categories which, in effect, render it useless. Any insights/power it may have had are lost as the makar of this way of perception attempts to link it into (traditional) ways of formulating rationality, knowledge and morality. These "established categories" (would 'establishment' be a more accurate choice of word here?) act as a kind of final line of defence; indeed, we've seen this over the past decade in regard to 'truth' and "fake news": what emerged was a distinction between "rational truth" and "emotional truth", in that what most of us would call 'truth', a category which exists independently of the human person's desires or wants (maths is the most obvious example), began to be displaced by "emotional truth", the idea that X is true because I want it to be true, I desire it to be true, regardless of there being no evidence or 'logic' (which is a whole other argument in itself) underlying my belief. Quite simply, "I want X to be true, therefore, X is true". To put it in traditional, rational terms, I have no evidence or proof to support my claim to truth other than my desire that X be true. This is decidedly not what most of us have been trained to do when investigating the truth of a proposition – see? Even the language I'm using to describe this is "pre-ordained". The suggestion is that the grounds for making a rational case for truth must exist independently of me as a human person. They must not include a human element. Thus, when calculating the worth of something and its relation to the defining characteristic of profit, we can make (or so we're told) no allowance for the human (cost). For example, if Arts degrees do not turn a profit, they must be scrapped, despite my questioning whether this is the right decision or arguing that we cannot apply the same criteria to Arts degrees as we apply to business degrees. According to rationality, to be fair we must apply the same criteria of judgement across the board; this is the ultimate sleight of hand. It makes no rational sense to argue that the same criteria be used for different kinds of degree…because they are different kinds of degree – they set out to do different kinds of thing, to achieve different ends. If it were the case that Arts degrees had colonised the concept of rationality, then business degrees would start to disappear.

My point here, to put it bluntly, is that the “one size fits all” approach does not work, is characterised by irrationality…is logically contradictory which, in turn, makes a mockery of democracy.

Capitalist democracy is a sham. The shell of this capitalist democracy will be preserved, provided it does not get in the way of profit. Should this occur, the mask falls and capitalist brutalism returns. Take the current Writers Strike in the USA; a memo has been leaked which states, unashamedly, that there will be no offering of terms until writers begin to lose their homes, their partners, their children, and are unable to pay their bills. Once this has taken place, the employers can enter ‘negotiations’ from a position of strength. All pretence of reasonable, democratic practice is sacrificed on the altar of profit. The conclusion to be reached is not new: Capitalism and Morality are incompatible. Life in the former is a vicious struggle against others, using any tactic available. In the latter it’s about talking to other people, about concern for their welfare, about respect.

Of course, philosophy is about clarity. In pursuit of this, there is a tendency to examine the history of philosophy, looking for connections with, and between, past writers. Take the role that reason plays in aesthetics. Both Kant and Hume cite reason as a means to escape from the emotional (hardly a surprise, given that Enlightenment philosophers use reason as a kind of "magic bullet" when pursuing knowledge). The emotional is regarded as a kind of 'pollutant', skewing "proper judgement". This attitude has fed into our society, whereby emotion is now generally regarded as a weakness (a reason to dismiss others' arguments), a kind of betrayal of self. Look at the ways in which this format is built into both patriarchal and colonial thought: in the former, reason is the preserve of the male, a hierarchical distinction that holds that male control of reason is the ground of male superiority. In the latter, this pattern is repeated; the colonised are at the mercy of their emotions, thus, they are feminised, seen as a group over whom control must be exercised for their own good.

In a similar, but more convoluted way, we can identify this use of reason in aesthetics. Firstly, there tends to be an assumption as to what reason is – a kind of negative definition too. We identify what reason isn’t. Secondly, reason is class-based – defined as a quality possessed in virtue of one’s position (which dispenses with the need to give reasons as to why one is in control of reason).

For example, Kant and Hume both make an appeal to reason, by trying to identify what it isn’t, and by defining emotion. In Hume’s case, we also see the introduction of the persistence-over-time argument. For example, if X is still recognised as an artefact in 100 years’ time then it qualifies. However, what Hume does not offer is an explanation as to why this might be the case – what ideological purpose does X serve?

In Kant’s case, what underlies his system, whether of morals or of aesthetics, is the idea of a benevolent god. So his system of reason is built on a supremely unreasonable premise: the existence of a character that is supernatural.

How does this connect to the main subject of this blog? Well, I'm trying to show how discourses operate by controlling the terms of the debate, by creating ideas of success and failure on the grounds that their system is objective…when it's not. Every system contains, within itself, the criteria of success and failure, attempting to pass these off as objective when they have, in fact, been generated by this system. Analysis will reveal that 'objectivity' is itself a product of the respective system that claims to possess it.

The Value of Imaginative Thinking II

(ii) Do we (should we) value Imaginative Thinking? (IT from hereon)

Do we need to be able to define something in order to judge its value? If IT varies from person to person, must we, therefore, say it cannot be valued (have a value placed upon it) because there is no identifiable “common denominator”? In other words, do we need some kind of metric in order to assess IT?

That isn't a rhetorical question (nor a kind of "Aquinas trick" – ask the question then answer it brilliantly). If we insist on a metric/metrics, then this would contradict IT. However, one kind of 'measurement' does occur to me: how does this IT contribute to human empathy? Does it encourage or exploit? I'm deliberately avoiding phrases like "human happiness" or "make the world a better place" here because each of those invites judgement and definition – we simply end up with a long discussion regarding what 'happiness' or 'better' mean. What I would argue, though, is that IT necessarily includes thinking of the human person as a subject rather than an object – there is no inherent idea of the human person as someone to be used in IT.

Obviously, what I'm arguing here is that IT is a crucial 'component' of artistic creation (in any Art). This may indicate that for X to be defined as Art, it must contribute to/expand/explain what it means to be a human subject – it must increase the understanding of the spectator, expand their empathy for others, identify injustice, 'move' them closer to engaging their own IT (which is not to suggest that the spectator merely deploys a copy of someone else's IT – my IT is my own, it cannot be someone else's. My IT may possess similarities to that of others, but it cannot be one and the same).

The human person’s IT reaches out to the world, identifying injustice and exploitation from their unique perspective as someone born into that world. It identifies the ideological, seeing unity in uniqueness. A crucial aspect of IT here is the way in which we think of ‘uniqueness’: not as something which separates me from others and makes them “fair game” for exploitation, but as a way of seeing others as human persons simultaneously like and unlike me and, therefore, ‘worthy'(?) of my care and respect. My IT is guided by this in engaging with the world around me, of which I am part. I am also able to engage with myself as a “foreign subject” in this world, to see myself as (an)other.

Just to try to rephrase this: ‘difference’ in this figuration becomes a reason for unity, for connectedness to others – NOT, as capitalist ideology encourages us to see it, a reason for competition and fear and, therefore, the motivation for aggression. This takes us ‘back’ to one of the unifying ‘properties’ of Art, whereby, when we encounter a work, we think “I too have felt like this/have this perception”, reactions we then share with others in the act of criticism.

Thus, the value of IT is that it creates bonds with others, enabling us to see our humanity reflected (and refracted) in Art. These reactions can be both similar and different at one and the same time; in this way, we illustrate to both ourselves and others that we are simultaneously similar and unique, without this realisation becoming the basis for competition and aggression.

How does this occur? Art exposes capitalist ideology, identifying the fractures and deliberate tensions that capitalism creates by insisting on profit as the motivation for human action and trying to pass this off as ‘natural’. Art engages our capacity for IT in a positive, communal way, while encouraging us to develop this capacity.

Nor does IT, I think, produce the fractures that Art as a function of capitalism does. By regarding Art as a product, capitalism introduces “levels of division”, in that the consumption of Art becomes a symbol of status – an indicator of one’s level of wealth, of one’s superiority to others. Hence, the (artificial) division between ‘high’ and ‘low’ culture, and the tying of this division to class, which, in turn, gives rise to the notion that “this work isn’t for me”. In this system, Art becomes a means of exclusion, of promoting elitism and competition.

The Value of Imaginative Thinking

This is where it gets more difficult; if this were a book, I’d have to work everything out beforehand, but it isn’t – it’s a blog, so the characteristics are different. Anyway, I’m not getting b(l)ogged down in that.

Two questions: (i) What is the value of Imaginative Thinking? (henceforth, IT) and, (ii), Do we (should we?) value Imaginative Thinking?

(i) In discussing IT, is it the case that one starts to attribute certain qualities or attributes to IT in order to establish value? This would seem to defeat the purpose; it would suggest that, unless this particular pattern and trajectory is revealed/followed, then what X is engaged in is not IT. Two things are striking here: firstly, this would follow the usual “negative definition” notion, in that we start by identifying what IT isn’t in order to have an idea of what it is. Secondly, what this kind of argument does is essentially say “Unless your IT does/includes X, Y and Z, then it is not IT” – so what I’m doing is defining what IT is…which contradicts itself. IT is necessarily a different kind of thing for each human person. There are no necessary and sufficient conditions that one can point to in order to say “Yes, this is IT”. What I’m trying to do here is suggest that IT is inclusive rather than exclusive. Put another way, IT bears a direct relation to the unique personality of the human person. This, however, seems to give too wide – too inclusive – a field. It leaves us with nothing to discuss: each human person can define their thinking as IT without fear of contradiction. Therefore, we need to go back a step to ideas of the human: are there essential characteristics of being human? It is here that we can make progress.

Compare two general notions of moral thinking. On the one hand, we have Western Philosophy, based on Plato and Aristotle. The “starting point” here seems to be that our moral behaviour must be guided/instructed by rules (in order for it to count as “moral behaviour”). The assumption is that the human person is ‘naturally’ selfish and self-centred, with no sense of connection to community, and must therefore be forced into developing such a sense. This, generally, facilitates a rather cynical perspective on what it means to be human.

On the other hand, we have the Chinese view, what I’d call the Confucian view (although my understanding of this may be wrong – it comes from my own reading). This is the idea that human persons are good, that they have a sense of their connectedness with community, and that their moral behaviour is guided by their desire to find opportunities to express this good. That is to say, moral behaviour is that behaviour which provides a sense of the good, defined as connection with our community.

Now, whilst I realise this is probably a simplistic reading, it does illustrate the fundamental difference between the two: the Western approach posits the idea of rules to ensure “good behaviour”, whereas the Chinese approach does not. Put another way, the latter does not start from a position of dis/mistrust (which, interestingly, is something built into business practice).

Where does this attitude come from? Why is it so entrenched in Western society that people argue it is ‘natural’, part of “human nature”? Well, we might begin by looking at the structure of our narratives: myths, legends and folktales, for example. These revolve around a single, central character who is, in some sense, opposed by others. Narratives often begin with an act of deception or betrayal. Narratives are designed as warnings, particularly for women – of the sort “Do as you’re told or things will go badly for you”. This is, of course, when young men are not being warned of the deceptive ‘nature’ of women. In the standard format, we accompany the ‘hero’ on their journey, to their moment of triumph over others. Conflicts arise for a variety of reasons: spite; vendettas; revenge; economic gain; perceived slurs. From myth, legend and folktale, we learn the structure of narrative, a structure we then build into the stories we tell others, and into the stories we tell ourselves about ourselves. Thus, from an early age, conflict with others is an expectation, a foregone conclusion of living as part of a community.

Capitalism formalises this into an ideology – admittedly with a few additions. However, this is fundamentally the “business model” on which capitalism trades: unity and co-operation are not ‘natural’; the individual must strive against these to ‘succeed’.

What’s fascinating is that we are prepared to dismiss myths, legends and folktales as products of superstition, devised to explain events – quite often natural phenomena – that occurred before science was sufficiently developed. Anthropologists will link ideas and events in previous ‘civilisations’ in order to explain this or that myth or legend. They examine the structures and sociopolitical dynamics that gave rise to X, Y or Z.

This is what Machiavelli does in regard to religion and its ideological position in supporting aristocratic rule (an idea later refined by Marx). He (correctly) identifies the notion of a ‘God’ as absurd, but as an extremely useful sleight of hand in maintaining rulers’ power: there is an ‘entity’, unseen and lacking any physical evidence of its existence, that can know all you think and see all your actions, and that will then sit in judgement on you (the ways of this judgement, and its punishments, being remarkably human in their execution).

What is puzzling is why this kind of analysis is not applied to the role of narrative in capitalist ideology: the veneration of the individual, the positing of competition as being ‘natural’, conflict with others as being part of “human nature”. Is it because this indoctrination begins at such an early age that we cannot see it for what it is?