Truth as “a function of power” has a nice ring to it—for some people. The rise of a cynical relativism declaring that truth doesn’t exist as anything “objective”, i.e. outside our perceptions of it, has parallels in the relativisation of morality. If “good” cannot exist unless, as Hamlet once put it, “thinking makes it so”, then good is relative to human beings; and since we know from experience that different historical times and circumstances “construct” the thinking of different people, if it is thinking that constructs the idea of the good, then the objectivity of morals is dead in the water. But even if we accept this questionable sort of reasoning about “good”, does it follow that truth must go the same way?

Nietzsche actually didn’t say that truth was a function of power[i], but he could see how possession or control of what was seen as the truth could provide the power to dominate how people think about reality: declaring some accounts of reality true and others false is a power play in larger struggles over defining reality. But from here do we inevitably go to “there is no truth”? The end-point of Orwell’s Nineteen Eighty-Four was, depending on the edition, either “2 + 2 = 5” or “2 + 2 = ?”, and the two slightly different conclusions to draw are, respectively, “absolute power determines what we agree to call reality” or “absolute power makes it impossible to know what is reality”. In neither case is it inevitable that the absolute power decides what is true, only that the absolute power controls people’s access to the truth. Orwell wasn’t postmodern enough to subject the external reality in the novel to the same “suspicion” that is engendered by the “unreliable narrators” and frequent shifts in perspective—staples of some 21st-century literary fiction—that call all perspectives into question. The only “suspicion” of all perspectives is that engendered in the novel’s characters by the controlling power. The “truth” that is a “function of power” is only the pseudo-truth of Big Brother, ground into the denizens of Airstrip One so comprehensively that they lose all faith in there being any other reality, or any “real” truth.

There is a logical fallacy in the reasoning that conflates “truth” with the dictates of power. The PoMo “suspicion” that would relativise truth out of existence works fine at a first-person level: what I perceive to be true is, tautologically, what I perceive to be true. The fact that I can’t get “outside” my perceptions to somehow “perceive” the noumena rather than the phenomena has tempted many modern thinkers to apply (subjective) first-person criteria to (objective) third-person phenomena and conflate the two realms, thus relativising the objective realm by sleight-of-hand.

While we’re on Nietzsche, his idea of “Perspectivism” is most apposite to talk of truth. The notion he put forward, in a muddled way perhaps, was that a broader perspective gives one a closer approximation to truth. There’s an obvious dependence on a simple visual analogy: observer A sees one person; observer B sees two, one of whom, from A’s viewpoint, is hidden behind the other. A says there is one person, B says there are two; being familiar with visual perspective, I will judge (“objectively”) that A is wrong and B is right. But Nietzsche’s melange of “perspectivist” theorising was immediately reduced, by various thinkers, to “relativism”. He didn’t endorse a full-blown relativism, though some of his less-considered remarks can be construed that way. A. C. Graham made a very careful analysis of Nietzsche’s “Perspectivism” and found that it did not reduce to relativism:

Throughout much of his work Nietzsche is faithful to the visual analogy, and assumes that the wider the range of perspectives from which one views the better one knows, the nearer one approaches the kind of objectivity he recognises…[ii]

Graham then quotes various passages from Nietzsche’s works that either support or undermine the non-relativist theme. But he believes that there is one “rescuable” argument in Nietzsche for a Perspectivism that does not reduce to relativism:

It is inherent in the visual analogy that perspectives are related, not to the persisting and self-centered viewpoints of individuals or communities, but to the vantage-points towards which, as with spatial positions, they direct themselves in order to get the most informative view of the scene. Nietzsche is always searching for the unnoticed perspective from which what he has himself said is revealed to be inadequate; and this constant requestioning is for him not a plunge into skepticism but a strengthening and enriching of knowledge.[iii] [My itals]

Graham allows that Nietzsche’s discussions of truth and knowledge are “very varied”:

Nietzsche may be seen as ranging between two poles; at one he rejects their very possibility, at the other he dismisses the all-or-nothing truths affirmed from single perspectives only to insist on the more-or-less of truth in multi-perspectival views.[iv]

It is when he unduly privileges the Übermensch that Nietzsche leaves himself open to charges of relativism; this and the “will to power” form a very important strand of his thought, and it is this aspect that gives rise to distortions like “truth is a function of power”. However, in Ecce Homo he talks of having “an eye beyond all merely local, merely nationally conditioned perspectives; it is not difficult for me to be a ‘good European’ ”—a superior perspective to that of the German historians, who have “utterly lost the great perspective for the course and values of culture…they have actually proscribed this great perspective. One must first be ‘German’ and have ‘race’, then one can decide about all values and disvalues…—one determines them.”[v] Here Nietzsche disavows the “truth as a function of power” perspective that was later formative of Nazi ideology.

We can talk of “telling truth to power”, and as long as the Bear, the Panda or the Eagle doesn’t simply obliterate us—any of them could—we can be sanguine about the relationship of truth to power. But first we must consider where power really lies in our post-millennial new world…

Globalisation, turbo-capitalism and exponential growth in digital technologies have created massive inequality, socioeconomic distress and the destruction of communities. They have also produced a profound shift in the locus of power, altering the pattern of winners and losers from the system: financial and technocratic elites operating at a global level are appropriating the power that was once invested in liberal democratic nation states. Technology is a relentlessly dynamic force that recognises no limits other than the limits of possibility (consider how the “limits of possibility” have changed in just the last two decades…). A technocracy is by nature totalitarian: the rule of a self-perpetuating system with no controlling centre, and therefore no bearers of political responsibility and no real accountability. Liberal order, in what is left of “democracy” before it is subsumed in plutocratic totalitarianism, is reactive to technology but cannot control it: consider the Facebook man’s recent stonewalling of a parliamentary “grilling” merely the first failure in a losing battle. Michael Hanby[vi] sees technocratic totalitarianism as “post-political” and suggests that post-political absolutism may inflame the desire for a political absolutism that promises to restore control—perhaps partly explaining the Trump phenomenon.

The unstoppability of technology, with the limits of the possible heading exponentially upwards, has led to the notions of desirability, morality and truth being perceived as old-fashioned and irrelevant beside the beckoning glitter of the rewards from the monetisation and commodification of everything in the reign of quantity that consumer capitalism has ushered in. Truth is not so much a function of power; rather, the new power coming up—rich techno-power—is in essence totalitarian and anti-human, and, importantly, it relies on the trashing of the concept of truth in order to continue expanding like a cancer. And why is this? Well… the whole concept of “telling truth to power” conceives of truth as an absolute, and totalitarian states do not countenance any absolutes other than themselves. But our situation is more complex than that, and has deep historical roots.

Close analysis of the building-blocks of modern academic philosophy reveals various ideological constraints on what it can say about truth, knowledge and meaning, which conspire to render today’s philosophy inadequate to explore the concept of truth fully. Grossly oversimplified, three general ways to truth have had broad acceptance in Western philosophy over its history: “rationalist” contributions, which suggest that truth is amenable to reason; “empiricist”, or experience-based, contributions (the modern materialist/scientific worldview is associated with the empirical approach); and modern and “postmodern” approaches after Kant, which have called into radical question the traditional approaches and in many cases the notion of truth itself. “Modern” and “postmodern” philosophy in the Anglosphere grew out of the empiricist tradition—the Anglo-American “broadly analytic tradition”, as some call it, owes much to the C17th rise of empiricism as a philosophy—whilst “postmodern” strands owe as much to Continental neo-rationalist ideas and to the social sciences.

While “philosophy bakes no bread” is a truism, there is a trickle-down effect from an academy so obviously opposed to the notions of “traditional wisdom” and the learning from history that used to be part of an ongoing intellectual endeavour. With neo-liberal capitalism and the marketisation and commodification of everything, education was unlikely to escape, and there was a (in hindsight predictable) synergy between the need to price and sell everything and the sort of thinking that became increasingly “privileged” in the universities: there is no truth, no certainty; the “grand narratives” of the Western tradition are rationalisations that legitimise racism, sexism and the persecution of minorities; the way forward is to fight for the individual rights of the oppressed… This led to the identity politics and narcissism that took the heat off the systemic wrongs and emasculated any genuine opposition to the political/social/technological system being established. The “postmodern suspicion”, ostensibly of all grand narratives, becomes a sort of paranoid obsession, and it permeates the modern world. One of the early results of this trickle-down suspicion in the world at large is the destabilisation of a culture of truth as a strong concept, which allows the “newspeak” merchants, the plutocrats and the opportunists, aided by various forms of mass media, to behave with cavalier disregard for anything but self-interest and the accumulation of wealth and power.

Parallel with the rise in narcissism and the focus on the individual, exacerbated by the overwhelming surrender to digital technology and “social” media that is presently moronising the population, is the naturalising of the market as the universal arbiter. The market allows only quantity as a yardstick (quality is an inconvenient judge of the value of anything measured in saleable units…), and a concomitant of this is that everything has its price; anything outside the market—anything with no monetary value—is seen as valueless (love, compassion, community feeling, spirituality, etc.), and thus human life is degraded. The opportunity to “like” anything and everything renders everything “measurable” and for sale…

Measurable, quantifiable, numerable, countable—the C17th rise of science, and the parallel rise of Empiricist philosophy, saw the enshrining in the Western mind of the logical part of human thinking. This severely impeded academic philosophy in the UK and the US (the “broadly analytic tradition”) by denying it access to what might be 80 to 90% of human thinking. A. C. Graham makes it abundantly clear that “analytic reasoning” is but a small part of our thinking:

Logicality itself is only one of the varieties, and not necessarily the most important for judging someone intelligent. Reason in the narrow sense can presume too much on being the capacity which distinguishes human from animal; an exclusively logical mind, if such is conceivable, would be less than animal, logical operations being the human activity most easily duplicated by a computer.[vii]

And back to Nietzsche: his “Perspectivism” can be nicely aligned with Graham’s version of how we think, viz: the “constant requestioning” that is for Nietzsche a “strengthening and enriching of knowledge” allows all relevant perspectives to be taken into account:

There is only a perspective seeing, only a perspective “knowing”; and the more affects we allow to speak about one thing, the more eyes, different eyes, we can use to observe one thing, the more complete will our “concept” of this thing, our “objectivity” be.[viii]

And the main “affects” that analytic philosophy denies itself are those available via the arts:

The arts can develop, clarify, and intensify awareness at any or every level, sharpening sense impressions, vivifying imagination, waking to unnoticed similarities, loosing correlation from conventional schemes, educating the incipient simulation by which we understand persons from within – analysing too, but never like philosophy and science uprooting the logical form from its bedding in other kinds of thinking. [ix]

The separation of the “rational” from the “emotional”, and the concomitant diminution of perspective, has been a trend since the C17th and has become chronic in the C21st. The “bubbles” and “echo chambers” of the internet reinforce tribal beliefs and actively work against reasoned debate, or even against exposure to thinking outside the thought-bubble straitjacket. There are parallels with the phenomenon, just prior to wars, of vilifying “the enemy” as other than human; many stories attest to the difficulty of killing a fellow human being who is perceived as such. (Orwell’s vignette of the Spanish Civil War—he found he could not shoot a running man who was obviously trying to hold his trousers up after losing his belt to some exigency—illustrates just this human trait. Soldiers are routinely brutalised to minimise its effect, and propaganda emphasises the “otherness” of the enemy.) In the same way, exposure to only “our” ideas makes others’ ideas alien. The concomitant narrowing of the human thinking process is cause for alarm—Facebook addiction among hoi polloi is one thing, but a generalised closing-off of the rational element in our thinking is dangerous: it is exactly what is required before the propaganda victory that creates “the enemy”…

The emotional/non-rational/irrational wave of C21st radical fundamentalism is to be expected as an equal and opposite reaction to the “scientistic” trend in analytic philosophy and the hyper-rational successes of technology. The trend to “spiritualism”, Eastern religious and yogic practices, etc., but without any but superficial grounding in traditional ideas that might give them depth, is also part of the push-back against hyper-analytic thinking. The internet is awash with “alternatives” to the modern hyper-rational world, most alas modelled on the neo-liberal consumerist blueprint and competing for attention in the Babel of shouting, over-simplified messages online.

The hyper-rational pole is represented in this environment too: dip not far into Google Scholar and you will find people discussing the “quantified self” (there is even a “Quantified Self movement”). They speak of a “qualified self”, constructed by applying the same Big Data algorithms and devices to track and alter mood, emotion, etc. Individuals will have “an increasingly intimate relationship with data as it mediates the experience of reality”, “creating qualitative feedback loops for behaviour change”… (Guénon’s “reign of quantity” might seem old hat.)

That these “poles” are further “polarising” would appear to be a verifiable fact. That they have been poles apart for some time is verifiable, too: C. P. Snow’s “Two Cultures” (1959) is well enough known, but C. S. Lewis’ The Abolition of Man (1943) is most interesting to revisit in the light of our present juxtaposition of the rational and irrational poles. Lewis wrote his three-chapter polemic in response to a textbook for senior high-school students that sought to “modernise” their thinking. What he railed against was its unconscious acceptance of a bias in the then-current Oxford philosophy that denigrated emotion, in the sense that it sundered fact from value. The is/ought impasse was cutting-edge then, and the well-meaning English teachers who wrote the book were utterly submerged in that zeitgeist. Lewis speaks in a language alien to the C21st, but he put his finger on the same sort of polarisation we are seeing today in slightly altered form. He was defending the objectivity of values at a time when radical subjectivism, emotivism and the “linguistic turn” were coming into the ascendant—and he correctly identified where a distorted focus on an exclusively rational analysis would lead. Because, as David Stove puts it:

It does not follow…because no reason can be given to believe p, that it is unreasonable to believe p, or that belief in p is groundless. Unless some propositions were known directly or without benefit of reasons for believing them, none could be known indirectly or by means of reasons.[x]

Or, as Coleridge put it, more poetically,

From the indemonstrable flows the sap, that circulates through every branch and spray of the demonstration.[xi]

As Lewis saw it, in the teachers writing the textbook and in the philosophical mainstream at the time:

Their extreme rationalism, by ‘seeing through’ all ‘rational’ motives, leaves them creatures of wholly irrational behaviour. If you will not obey the Tao, … obedience to impulse (and therefore, in the long run, to mere ‘nature’) is the only course left open.[xii]

Practically all traditions throughout recorded history show a striking convergence in the sorts of beliefs, behaviours, rules and attitudes that comprise the right way to live—the “perennial philosophy” has many strands, many different applications in many different cultures and traditions, but, mutatis mutandis, all these strands comprise the same Way. In the first chapter of The Abolition of Man, Lewis outlines what he is going to mean by “the Tao”:

In early Hinduism that conduct in men which can be called good consists in conformity to, or almost participation in, the Rta … Righteousness, correctness, order, the Rta, is constantly identified with satya or truth, correspondence to reality. Plato said that the Good was ‘beyond existence’… The Chinese also speak of a great thing (the greatest thing) called the Tao. … It is Nature, it is the Way, the Road…. It is also the Way which every man should tread in … conforming all activities to that great exemplar [for Lewis, Christ].

This conception in all its forms, Platonic, Aristotelian, Stoic, Christian and Oriental alike, I shall henceforth refer to for brevity simply as ‘the Tao’. … What is common to them all is something we cannot neglect. It is the doctrine of objective value, the belief that certain values are really true, and others really false, to the kind of thing the universe is and the kind of things we are…. [it is to] recognize a quality which demands a certain response from us whether we make it or not.[xiii]


Lewis saw the polarisation as “follow the Tao” versus a return to a state of “nature”. “Follow the Tao” can be simplified for our purposes (without undue distortion) as a paraphrase of Stove’s “unless some propositions were known directly or without benefit of reasons for believing them, none could be known indirectly or by means of reasons”: if you reject these universal values that cannot be derived from reason alone, you can have no grounds for judgement—all your reasoning floats without foundation:

My point is that those who stand outside all judgements of value cannot have any ground for preferring one of their own impulses to another except the emotional strength of that impulse…

For without the judgement ‘Benevolence is good’ — that is, without re-entering the Tao — they can have no ground for promoting or stabilizing these impulses rather than any others. By the logic of their position they must just take their impulses as they come, from chance. And Chance here means Nature.[xiv]

And “return to a state of ‘nature’ ” can also be aligned with our polarities. By “nature”, Lewis does not mean the Tao as “Nature … the Way, the Road…”. He says “nature” has varying meanings, and proceeds to define it via its opposites (“the Civil, the Human, the Spiritual, and the Supernatural”). From “nature” come our basic drives, impulses and emotions, and for Lewis the intellect (reason) alone is inadequate to keep these in check. In a nod to Plato, he uses a three-part notion of the human comprising head, heart and gut: the intellect, the sentiment (the “heart”) and the aforesaid basic drives. By following the Tao, the heart can be educated, emotions can be civilised, and basic drives can be kept in check. It is a very traditional notion of the human, and one that has been desperately unfashionable for almost a century; but the relativisation of values has had certain effects—of which Lewis was perhaps not too dimly prescient in 1943.

There is irony in the current polarisation of the hyper-rational techno-elite world of Google, Amazon et al. and the hot, primitive emotions moiling on the world wide web. Science, and scientistic and empiricist/materialist thought—in the academy, in business, in planning, in politics (“it’s the economy, stupid!”)—has moved away from the non-rational, the emotive, that which cannot be pinned down in propositions that can be manipulated and tested. This provides a rich matrix for two sorts of reaction: the anti-rational rise of populism seen everywhere, and the hyper-rational rise of techno control. On the surface these seem diametrically opposed, but they are two sides of the one phenomenon. Two sides of the same coin are not the “full two bob” in themselves, of course, and one of them is being duped. The hoi polloi who fulminate on Twitter, who vote for the likes of Trump or just about anyone out of the current sad crop, who have knee-jerk reactions to everything from individually-crafted inducements to buy to traffic-rage outside their kids’ school, no longer accept the old “values” of community, democracy, decency, etc., which have been “shown” to be fabricated by an unfair system—one that has been exploded by the academy… and they are surrendering, via the technology they have become addicted to, to the purely rational/analytic control of very rich elite groups.

And the crushing irony is that the rational/irrational dichotomy is a false one. If, as A. C. Graham has demonstrated, 80+% of thought is “non-rational” from the point of view of analytic reasoning, the two sides are part of the same coin: Graham explores this notion in a number of essays in Unreason Within Reason: essays on the outskirts of rationality. There is no “rational” way to prove just about any answer to any of the “big” questions, and in fact there is no “rational” way to prove that a “logical” argument actually “proves” its conclusion. As David Stove said: “The greatest logician in the world cannot explain, any more than the layman can, why ‘All swans are black and Abe is a swan’ entails ‘Abe is black’.” What are our reasons for believing:

  a) M a P      [all M’s are P’s]
  b) S a M      [all S’s are M’s]
  c) ∴ S a P    [all S’s are P’s]   ?
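In modern predicate-logic notation (a sketch, not in the original), this is the classical syllogism “Barbara”; Stove’s point is that any further “reason” offered for accepting the entailment would itself be just another premise, and the same question would arise for it in turn:

```latex
% Barbara: premises (a) and (b) jointly entail conclusion (c).
\[
\forall x\,(Mx \rightarrow Px),\qquad
\forall x\,(Sx \rightarrow Mx)
\;\therefore\;
\forall x\,(Sx \rightarrow Px)
\]
% Any "reason" R for accepting this entailment becomes a new premise:
% (a), (b), R, therefore (c) -- and R itself now needs a reason,
% ad infinitum. Hence Stove's claim: some propositions must be known
% directly, without benefit of reasons.
```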

Graham, a sinologist rather than an analytic philosopher, but one with a strong empiricist bent, brought an acute and unconditioned mind to the “problems” of rationality, consciousness and the fact/value dichotomy that have perplexed the broadly analytic tradition for many years. After showing, with a rather delightful allusion to Pavlov, that most of our thinking is correlative, he demonstrates that we have an emotional response to facts as well as a (probably later) rational one. As Graham asserts:


…all analysis has its starting-points in the pre-logical underground of thought – in concepts born from spontaneous correlations, which may be discredited if the conclusions drawn from them are contradictory or refuted by observation, but can be replaced only by a spontaneous correlative switch; and from spontaneous motivations which are to be evaluated by the degree of awareness of oneself and of the objects to which one finds oneself responding. (p. 208, my emphases)

And it is this “spontaneous correlative switch”, occurring when we are open to wider and wider perspectives, that causes us to “change our minds”—no-one is persuaded by rational argument to adopt a different view: it is only when we have “decided” pre-rationally to allow ourselves to be moved in a particular direction, in the light of a wider array of facts, that we choose a “position”.

A most important observation of Graham’s, in the present context, is that:

No mode of thinking, poetic, mythic, mystical, whatever you please, is to be called irrational merely because it is pre-logical, but it is irrational to accept it without having a test which it satisfies. Rationality is intelligence excusing none of its varieties from logical tests. (p. 15, my italics)

And my thesis is that the great divide, which probably started during the Enlightenment or before, is a traducing of the human, which has always had at least two (and, pace Plato, possibly three) components which, when properly aligned in the “right” proportions, have resulted in the Tao, the right way to live. A purely analytic-rational focus denies important parts of being human and important ways of arriving at truths: the juggernaut of technology is a totalitarian force that, by “uprooting the logical form from its bedding in other kinds of thinking”, to use Graham’s words, has taken the locus of power out of human hands. And the only sorts of truths it recognises in this supposedly “post-truth” era are the thin truths, the truths that are instrumental in consolidating its absolute power.[xv]

The pressing question for our age is how to reconcile the two divided parts of our humanity, somehow to heal the rift, and thus to show that C. P. Snow’s “two cultures” and C. S. Lewis’ Abolition of Man are not accurate predictions of our near future. At present, “telling truth to power” is a fraught undertaking, because those who might do so are hampered by the legacy of the last three hundred-odd years and its insistence on truth’s only being available via the narrow analytic-rational route, which is the operating system of the main technological and financial power blocs in today’s world. A recognition that human rationality is much broader and more varied than logical operations might be a good place to start on the “reconciliation”.

Tom McWilliam

[i] “All things are subject to interpretation. Whichever interpretation prevails at a given time is a function of power and not truth” are said to be Nietzsche’s actual words… [my italics]

[ii] Graham, A. C. Unreason Within Reason: essays on the outskirts of rationality, Open Court, Illinois, 1992, p30

[iii] Ibid. p31

[iv] Loc. Cit.

[v] Ibid. pp30-1

[vi] Hanby, M. “A More Perfect Absolutism” in First Things Oct. 2016, p26

[vii] Graham, Op. Cit. pp15-16 (my italics)

[viii] Nietzsche On the Genealogy of Morals, in Graham Op. Cit. p30

[ix] Graham, Op. Cit. p215 (my italics)

[x] Stove, D. The Rationality of Induction, OUP, UK, 1986, p180 (my italics)

[xi]  Coleridge, S. T. (ed. I. A. Richards) The Portable Coleridge, Penguin/Viking, N. Y.

[xii] Lewis, C. S. The Abolition of Man, 1943, Ch. 3

[xiii] Ibid. Ch. 1

[xiv] Ibid. Ch. 3

[xv] What might seem apposite here, perhaps, is Aphorism 4 of Beyond Good and Evil: “TO RECOGNISE UNTRUTH AS A CONDITION OF LIFE; that is certainly to impugn the traditional ideas of value in a dangerous manner, and a philosophy which ventures to do so, has thereby alone placed itself beyond good and evil.”

The Legacy of Enlightenment Philosophy

When we think of “hipster café culture” we can be dismissive of something easily caricatured as vacuous, narcissistic and intellectually not up to much. What a contrast, we might think, with the coffee-houses, salons and public meeting-places of the C18th. The intellectual ferment of the “century of philosophy”, we feel, was something qualitatively different from the life of the mind in the C21st. And yet we are told by many that The Enlightenment ushered in the early modern world; The Enlightenment, in the words of a Year 11 essay on the subject,

…was a way of thinking that focused on the betterment of humanity by using logic and reason rather than irrationality and superstition. It was a way of thinking that showed scepticism in the face of religion, challenged the inequality between the kings and their people, and tried to establish a sound system of ethics.[1]

But “The Enlightenment”, scare quotes and all, is perceived as a contested site by the modern academy. Before expanding on this, let us consider the Enlightenment period and its legacies from a variety of perspectives. Nietzsche always looked for the additional, wider perspective that would show his own as inadequate, but he didn’t dissolve into relativist despair at the prospect—this constant re-questioning led not to scepticism but to a strengthening and enriching of knowledge. We might do well to follow his example.

From one perspective, perhaps the broadest one available, “enlightenment”, “seeing the light” and the “light of reason” have always been associated with progress towards a clearer grasp of reality and truth. Labelling one short period in history “The Enlightenment” may seem offensively self-congratulatory and serve to negatively characterise other periods, like the “Dark Ages” before it, as unenlightened. The tradition of “Wisdom” (the demeaning inverted commas conferred only in the C20th), or the Perennial Philosophy, has long held that certain truths about reality have always been available to humans, and that various traditions, most often religious ones, have approached these same truths via different paths. Perennialism suggests that, rather than a syncretic amalgamation of these traditions in a New-Age pot-pourri, what is essential in all of them can be seen—on a higher level—as a way to true enlightenment.

A radically different way of viewing the intellectual ferment of the late C17th and the C18th is the “Enlightenment-bashing” postmodern characterisation of it as just a flimsy intellectual cover for Europe’s aggressive colonisation of most of the rest of the world. Reason’s claim to universality, to bringing the light to the benighted, is nicely exposed in a Cook cartoon of thirty or more years ago: Malcolm Fraser, apparently struggling under a huge box labelled “The White Man’s Burden”, waves off a black man offering to help him carry it with, “There’s nothing in it”. Since then, of course, that “burden” has been read as the burden of oppression, one that imposes an obliterating white “reason” on the kaleidoscopic variety of other cultures, previously seen as benighted.

“Enlightenment values”—if we take the clichéd ones in the Year 11 essay above—are like motherhood statements for most people in today’s world. Who would cavil at “reason” as against “superstition”, or at social reform that sought to reduce inequalities? A standard characterisation of The Enlightenment (capital ‘T’, capital ‘E’) has it that what this century of ferment brought into being was akin to a maturing of humanity, a time when we threw off the shackles of tradition and superstition and started to think for ourselves as a species. Certainly it was a time of radical change: the scientific “revolution” of the centuries before, which threw into turmoil most of the received ideas about how the universe formed and functioned, and caused huge reaction and major re-thinking on the part of religious philosophers whilst inviting sceptical speculation about the accepted order, was one cause of “disruption”. Another disrupting factor was the “age of exploration” from the mid-C15th to the C17th, which, by bombarding Europe with new discoveries, considerably widened the “possibility field” and legitimated thinking outside the usual parameters. It has been noted that the two periods which have contributed most to Western thought—C5th and C4th BCE Athens, and C17th and C18th CE Europe—have been periods of upheaval and radical change in the circumstances of their respective populations. It could be that we are in another such period, and have much to learn from the Greeks and from the Enlightenment…

So, to look closer at this “Enlightenment”—examining some typical statements of major Enlightenment thinkers might allow us to discern certain themes and emphases.

In France, Voltaire:

“Prejudices are what fools use for reason”

“Those who can make you believe absurdities can make you commit atrocities”

“The truths of religion are never so well understood as by those who have lost their reason”

…and Montesquieu:

“The tyranny of a prince in an oligarchy is not so dangerous to the public welfare as the apathy of a citizen in a democracy”

“To become truly great, one has to stand with people, not above them”

…and Rousseau:

“Man is born free and everywhere he is in chains”

“Conscience is the voice of the soul; the passions are the voice of the body.”

…and Diderot:

“Man will never be free until the last king is strangled with the entrails of the last priest”

“Only passions, great passions can elevate the soul to great things”

One might be forgiven for thinking an anti-authority, anti-clerical temper pervaded the French Enlightenment. It is only when we probe a little deeper than the caricature that quite fundamental differences in outlook can be discerned. Nietzsche, whose aphoristic and rhetorical work might be seen as “uneven”, could nevertheless come up with the sharpest insights. He spoke of the “serenely elitist Voltaire” and the “enviously plebeian Rousseau”, saying Voltaire was an “unequivocal top-down moderniser” whereas Rousseau saw that the “Enlightenment project of willed, abstract social reform” could “cause deracination, self-hatred and vindictive rage”. Perhaps in the light of the Terror after the French Revolution we could accept Nietzsche’s observation, whether Rousseau grasped all its implications or not.

And in England, the “Enlightenment” took a slightly different course: what was an anti-authority ferment in France played out differently in England’s constitutional monarchy (Voltaire was “exiled” to England for a time, and took on many of Locke’s empiricist ideas). There was a widely held view, in what has become the “broadly analytic tradition” in modern Anglo-American philosophy, that the most important thing that was happening in England from the early C17th through the C18th was the gathering ascendancy of “British Empiricism” as a world-view.

The C17th and C18th development of the British Empiricist School was closely paralleled by the exponential growth of the experimental sciences and their discovery of an identity distinct from pure mathematics. Russell’s notion of a “reciprocal causation”, between the circumstances people live in and what they think, helps explain this synergistic flourishing. But, of course, the empiricist “attitude” is not new with the birth of modern science: a curiosity giving scope to observation and experimentation has always been at least a small part of being human, and has been responsible for enormous changes in what we know of the world.

An important perspective on the Enlightenment period sees it through the lens of the “Rationalist/Empiricist” dichotomy. The simplistic “England was Empiricist and the Continent was Rationalist” is just that: simplistic and virtually useless as an explanation of what was going on. David Hume, whom we will get back to later as one of the most “reasonable” voices of the Enlightenment, said that the central philosophical debate of his day was waged between “speculative atheists” and “religious philosophers”. In the England of the late C17th it was possible to offer a publicly persuasive “confutation of atheism”—the title of the first series of Boyle Lectures, by Richard Bentley in 1692, which gave natural theology a prominent place in intellectual discourse. The lectures delivered over the period 1692-1732 were widely regarded as the most significant public demonstration of the “reasonableness” of Christianity in the early modern period, characterised by that era’s growing emphasis upon rationalism and its increasing suspicion of ecclesiastical authority. Significant empiricist philosophers like Locke and Berkeley, and scientists of the stature of Newton, were advocates of some of the then current (rationalist) arguments that sought to prove the existence of God. And on the continent, quite radical empiricist voices punctuated the generally rationalist tone: Voltaire, as mentioned above, was greatly influenced by Locke’s empiricism, adopting his refutation of innate ideas to criticise Descartes, and making fun of the famous rationalist philosopher Leibniz in Candide; Condillac took Locke’s empiricism to a much more extreme position in his Traité des sensations; and La Mettrie, the extreme materialist, wrote Man a Machine. On the other hand, J S Bach, who flourished in the first half of the C18th, might be read as presenting a mathematically precise musical argument for faith arrived at by reason.

It seems that once we characterise an era by its prevailing ideology, we are drawn to view the entire period—with its seething mass of conflicting ideas, interests, personalities, politics and events—as best understood from that perspective alone. Once embarked on a theme, our thoughts move effortlessly among ideas marked by similarity and contiguity. Confirmation bias lights the way as deeper and more thorough research shapes a forming thesis. David Hume pointed out this “habit of mind”; it was, I believe, one of his most important insights, and also one of his most overlooked. To understand why, it will be useful to survey several perspectives on Hume, which stridently assert different versions of his place in the “Enlightenment” milieu. Firstly, the “British Empiricism” perspective: Hume as third in the triumvirate of Locke/Berkeley/Hume.

He is often described as representing the “dead end” of empirical philosophy, as arriving at such “shocking conclusions” that philosophy has been reeling ever since. Bertrand Russell talks of the “self-refutation of rationality”, sees Hume’s scepticism as being “inescapable for an empiricist”, and expresses the “hope that something less sceptical than Hume’s system may be discoverable.” Hume took the empiricist project to its logical conclusion. What we can know for certain by the application of empirical principles is strictly limited. Locke said as much, Berkeley halved what Locke believed we could know, and Hume put paid to most of the rest. There is at least a surface parallel with Socrates’ insistence that human ignorance must be understood before knowledge is possible, but there seems to be a difference in what is possible after this “extent of human ignorance” is grasped.

The consequences of Hume’s philosophy are no less than the death of all rationalistic metaphysics and ethics, the acceptance of a purely descriptive role for natural science, and the inclusion of human thought and action as natural processes within the province of biology and psychology.[2]

And how did he do this? Ostensibly by the application of empirical methods. The “British Empiricists” are seen as primarily concerned to provide an account of the philosophical foundations of human knowledge in general, and of modern science in particular. There is a definite ideological overreach in Macnabb’s summation of Hume’s legacy, but the same ideology forms an important strand running through the broadly analytic “tradition” that is still the dominant philosophical milieu in the Anglosphere, postmodern relativist incursions notwithstanding. The gospel of British Empiricism has been parodied along the lines of:

“Let there be light!” and there was light, and He called it “renaissance”, but saw that there was still darkness, so He took a rib of the renaissance with which to make greater light. But the rib broke, and there arose two false lights, one Bacon, meaning “Father of the British Empiricists”, and one Descartes, meaning “Father of the Continental Rationalists”. And the Creator saw that they should war, so He divided them by a great gulf, until there should arise in the east a great philosopher who shall be unlike them and yet like them, who will bring true light and unite them. And thus it was that Bacon begat Hobbes, and Hobbes begat Locke, and Locke begat Berkeley, and Berkeley begat Hume. And thus it was that Descartes begat Spinoza, and Spinoza begat Leibniz, and Leibniz begat Wolff. And then it was that there arose the great sage of Königsberg, the great Immanuel, Immanuel Kant, who, though neither empiricist nor rationalist, was like unto both. He it was who combined the eye of the scientist with the mind of the mathematician. And this too the Creator saw, and He saw that it was good, and He sent goodly men and scholars true to tell the story wherever men should henceforth gather to speak of sages past.[3]

And of course this history of early modern philosophy has been called into serious question, but it still has explanatory power, and leaves a void to fill if it is rejected outright.

Another perspective claims to fill that void. Rather than seeing Hume as the apotheosis of British Empiricism, we should situate him in his historical, social and political context. If, as he suggested, the main philosophical debates of the time were between “religious philosophers” and “speculative atheists”, it would be fair to assume that Hume had a position in this debate. And we do encounter religion in the bulk of his philosophical writings. The perspective that Hume’s was a “philosophy of irreligion” suffers from the usual pejorative connotations of that term. The Australian OED gives “indifference or hostility to religion”, and a moment’s thought shows that this is a strong disjunction: you cannot be indifferent and hostile at the same time. The proponents of the “irreligious” perspective seem on close reading to equivocate between the two denotations, or, if they make a definite case for “indifferent”, they allow the pejorative connotations around that word free rein, perhaps occluding the standard sense of “having no partiality for or against”. Hume was hostile to the robust theism of the major religions, especially Christianity, because he saw how belief in it was used to legitimise the various atrocities associated with religious wars, crusades and the like; however, he never espoused the aggressive and pugnacious atheism of someone like, say, Dawkins in our era. Hume apparently once said to Baron d’Holbach, “I’ve never even met an atheist”. Like the “British Empiricism” perspective, the “irreligious” one is useful but partial.

A further way of viewing Hume: throughout the twentieth century and up to the present time, Hume’s philosophy has generally been understood in terms of two core themes, scepticism and naturalism. The obvious difficulty is how these two themes are related to each other and which one represents the “real” Hume. How to reconcile Hume’s radical scepticism with his efforts to advance a “science of man”—a tension that pervades Hume’s entire philosophy and is most apparent in his Treatise—has exercised many good minds. It has given rise to technical arguments about sets and sub-sets of scepticism; arguments about whether Hume actually believed that the Pyrrhonian end-point was unavoidable; questions about how a “moderate” scepticism could be advanced after an “obvious” acknowledgement of Pyrrhonism; and readings of Hume as pursuing an essentially destructive or negative philosophical program, the principal aim of which is to show that our “common sense beliefs” (e.g. in causality, the external world, the self, and so on) lack any foundation in reason and cannot be justified. This sceptical reading of Hume’s philosophy dates back to its early reception, especially by two of Hume’s most influential Scottish critics, Thomas Reid and James Beattie. Viewed this way, Hume’s reputation is well summed up by Bertrand Russell:

David Hume is one of the most important among philosophers, because he developed to its logical conclusion the empirical philosophy of Locke and Berkeley, and by making it self-consistent made it incredible. He represents, in a certain sense, a dead end: in his direction, it is impossible to go further.[4]

In important ways, philosophy in the English-speaking world has been “after-Hume” in accord with these perspectives on his work. And in important ways these sorts of perspectives exacerbate a tendency that has been building since Bacon and well before: philosophy begins in wonder, but when certain ideologies are adopted, what it is acceptable to wonder about is categorised, and other types of enquiry outside these categories are “sequestered”—sometimes as scientific disciplines, sometimes as theology, sometimes as “nonsense”. The empirical philosophies of the 17th and 18th centuries set new bounds for what could be “seriously” considered—the doctrine that all knowledge is ultimately based on sense-experience placed religion and metaphysics outside the realm of the knowable, and thus out of the realm of philosophy. Reduction to the material oversimplifies and distorts. And the analytic tradition has imbibed, to near-intoxication, the caveat on emotions that goes back at least to Plato. Part of Martha Nussbaum’s thesis in Love’s Knowledge is that it is only via literature that certain truths are apprehensible, because it is only literature that can explore some of their depths—“powerful emotions have an irreducibly important cognitive role to play in communicating certain truths”.[5]

The “style” of the modern analytic tradition was set by Locke’s tone: it powerfully conveyed the belief …

…that the truths the philosopher had to tell are such that the plain clear general non-narrative style most generally found in philosophical articles and treatises is in fact the style best suited to state any and all of them.[6]

The sentence “‘Snow is white’ is true if and only if snow is white” is an honest attempt to encapsulate something of the “essence” of truth in a proposition that the analytic tradition finds unexceptionable. It doesn’t profess to “reconnect us to higher possibilities”. As a most obvious, uncontroverted and uncomplicated thing that can be said about the meaning of “truth”, it has about as much resonance as a mission statement. However, it is an end-product of generations of struggle to dispose of the unwieldy mass of superstition, metaphysical confusion and unwarranted conflation of related concepts that philosophical discourse on truth, under analysis, proves to be. Outsiders bemusedly ask what went wrong: how could such labour produce such seemingly insignificant results?
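The snow-white truism instantiates what logicians after Tarski call the T-schema; the attribution to Tarski is my gloss, not the essay’s, but writing the schema out shows just how spare the analytic rendering of truth is:

```latex
% Tarski's Convention T: an adequate truth-predicate T for a language
% must entail every instance of the schema
%   T(\ulcorner p \urcorner) \leftrightarrow p,
% where \ulcorner p \urcorner is a name of the sentence p.
% The instance in the text:
T(\text{``Snow is white''}) \leftrightarrow \text{snow is white}
```

The schema says nothing about why snow is white, or how we could know it; it fixes only the logical behaviour of the word “true”, which is precisely the modesty (or poverty) the paragraph above describes.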

The answer lies in the ideology of the modern analytic tradition, which has developed from the empiricist mindset that defined what was and was not part of philosophy, whilst incorporating only selected features of older traditions. This sequestration has simplified philosophy within the analytic tradition. Read a certain way, David Hume can be seen as doing the most thoroughgoing hatchet-job: his “consign it to the flames” at the end of the Enquiry became a manifesto for many. But what we have in the intellectual milieu of the C21st is a bastardisation of Hume: Russell’s idea—of a “self-refutation of rationality” derived from Hume—has fed the sceptical/relativist postmodern zeitgeist, while Hume’s perceived “extreme” empiricism has fed into the analytic tradition in ways that have spawned behaviourism and radical scientistic materialism.

I believe that Hume was better than this. Might I, with something of Hume’s own “diffidence”, suggest another perspective to cast long-overdue light on his overlooked contribution? Russell’s “self-refutation of rationality” was a step too far: the scientistic/materialist mindset has an ideological propensity to view “rationality” as the logical 1-2-3 analytic sort of thinking that is only a part of rationality, but perhaps, as A C Graham says…

Logicality itself is only one of the varieties, and not necessarily the most important for judging someone intelligent. Reason in the narrow sense can presume too much on being the capacity which distinguishes human from animal; an exclusively logical mind, if such is conceivable, would be less than animal, logical operations being the human activity most easily duplicated by a computer.[7]

This embedded hyper-rationalism within the basically empiricist tradition has had far-reaching ramifications for the modern mind. Whether Hume’s uncoupling of cause/effect, is/ought and the “necessary connexion between any two ideas whatsoever” puts paid to “rationality”, as Russell so melodramatically asserted, depends on how rationality is conceived. If it is conceived as reason in the “narrow sense” that Graham alludes to above, then Hume does mark an end point, and modern philosophy since should have descended into Pyrrhonian scepticism or moved in a completely different direction. But it did neither. Yes, there are extreme relativistic and sceptical camps out there, and yes, there have been attempts, often via “French theory”, to branch out in new directions… but to various dead ends. And yes, before that we had the “linguistic turn” that sought to make meaning out of language, and yes, we have had logical positivism, reductive materialism and many other isms, but Hume’s actual nailing of what human rationality is about at a basic level has not had the effect it should have had on the way we think today.

Hume’s important insight, as flagged earlier, was not to call rationality into question, but to show that using (exclusively) the small part of it that is narrow, analytical reasoning to “argue for” any certain conclusion at all—God, causality, induction, the boiling point of water—cannot deliver demonstrative “proof”. Simone Weil, in Gravity and Grace, wrote: “The intelligence has nothing to discover, it has only to clear the ground. It is only good for servile tasks.” But to jump from this sort of insight to a belief in the impossibility of meaning/knowledge/truth requires an “ideological irruption”. Note: the intelligence is good, albeit for servile tasks. To write off the intelligence as worthless because it cannot guarantee the conclusions it reaches is tantamount to an all-or-nothing deductivist scepticism that writes off any incomplete, less-than-perfect train of thought that doesn’t converge on an “entailment”. It is using a type of analysis that is only a part of thought to stand in for all of thought.

Hume was no sceptic (capital S): he understood scepticism to be a useful tool, but a barren religion:

The chief and most confounding objection to excessive scepticism is that no durable good can ever result from it; while it remains in its full force and vigour.[8]

And he also recognised, with that last clause “[while] it remains in its full force and vigour”, the emotional content of our motivations to accept or reject so-called objective reasonings.

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.[9]

This chimes with what A C Graham says in “Poetic and Mythic Varieties of Correlative Thinking”—there is “something about objective knowledge that obstructs a subjective recognition that our choices are between directions in which we are being moved by conflicting forces from outside ourselves” (my italics) and “The analytic remains imprisoned in itself, seems to start from itself, forgetting its dependence on spontaneous correlation for its concepts and on spontaneous motivations for its prescriptions.”[10]

It seems that we live in a time like the Enlightenment, like C5th Greece, in an onrush of potentially cataclysmic change—foundations are being challenged; the very ways that foundations can be challenged are being challenged; fundamental concepts like truth are under attack—and our ability to cope, to take control, to find a way through, has been fatally compromised by the new great schism, between reason and unreason, that we see everywhere, and which is in part a legacy of how the “Enlightenment” has been interpreted. The current polarisation, between the hyper-rational techno-elite world of Google, Amazon, et al., and the moiling of hot, primitive emotions on the world wide web, is one end result of a particular focus on reason—not the beneficent light of reason that seeks to encompass all things, but the spotlight of a reason confined within its own limits that can see its own progress only as an unmixed good. This hubris provokes the equal and opposite reaction: populism; distrust of experts; blind rage at a regime that does not take into account the crucial parts of being human that are not measurable, monetise-able, or reducible to propositions that can be manipulated; and, ultimately, bloody revolution.

A judicious mix of Nietzschean Perspectivism and Humean reason might afford us the best means of making sense of the legacy of the Enlightenment.

Tom McWilliam, September 2018


[1] Part of a sample essay for students to consider when they are “researching” the period.

[2] Macnabb, D. G. C., Introduction to Hume, D., A Treatise of Human Nature, Book I, Fontana, 1962, 11-12

[3] [With apologies to David Fate Norton.]

[4] Russell, B. A History of Western Philosophy, Allen & Unwin, 1947, 685

[5] Nussbaum, M. Love’s Knowledge, OUP, 1990, 7

[6] Ibid., 8

[7] Graham, A. C. Unreason within Reason: Essays on the Outskirts of Rationality, Open Court, Illinois, 1992

[8] Hume, D. An Enquiry Concerning Human Understanding, OUP, 1980, 12, 23

[9] Hume, D. A Treatise of Human Nature, Bk 2, Pt III, Sect. III, OUP, 1960, 415

[10] Graham, op. cit., 221