The Legacy of Enlightenment Philosophy

When we think of “Hipster café culture” we can be dismissive of something easily caricatured as vacuous, narcissistic and intellectually not up to much. What a contrast, we might think, with the coffee-houses, salons and public meeting-places of the C18th. The intellectual ferment of the “Century of philosophy”, we feel, was something qualitatively different from the life of the mind in the C21st. And yet we are told by many that The Enlightenment ushered in the early modern world; The Enlightenment, in the words of a Year 11 essay on the subject,

…was a way of thinking that focused on the betterment of humanity by using logic and reason rather than irrationality and superstition. It was a way of thinking that showed scepticism in the face of religion, challenged the inequality between the kings and their people, and tried to establish a sound system of ethics.[1]

But “The Enlightenment”, scare quotes and all, is perceived as a contested site by the modern academy. Before expanding on this, let us consider the Enlightenment period and its legacies from a variety of different perspectives. Nietzsche always looked for the additional, wider perspective that would show his own as inadequate, but he didn’t dissolve into relativist despair at the prospect—this constant re-questioning led not to scepticism but to a strengthening and enriching of knowledge. We might do well to follow his example.

From one perspective, perhaps the broadest one available, “enlightenment”, “seeing the light” and the “light of reason” have always been associated with progress towards a clearer grasp of reality and truth. Labelling one short period in history “The Enlightenment” may seem offensively self-congratulatory and serve to negatively characterise other periods, like the “Dark Ages” before it, as unenlightened. The tradition of “Wisdom” (the demeaning inverted commas conferred only in the C20th), or the Perennial Philosophy, has long held that certain truths about reality have always been available to humans, and that various traditions, most often religious ones, have approached these same truths via different paths. Perennialism suggests that, rather than a syncretic amalgamation of these traditions in a New-Age pot-pourri, what is essential in all of them can be seen—on a higher level—as a way to true enlightenment.

A radically different way of viewing the intellectual ferment of the late C17th and the C18th is the “Enlightenment-bashing” postmodern characterisation of it as just a flimsy intellectual cover for Europe’s aggressive colonisation of most of the rest of the world. Reason’s claim to universality, and to bringing the light to the benighted, is nicely exposed in a Cook cartoon of 30 or more years ago: Malcolm Fraser, apparently struggling under a huge box labelled “The White Man’s Burden”, waves off a black man offering to help him carry it with, “There’s nothing in it”. Since then, of course, that “burden” has been read as the burden of oppression, one that imposes an obliterating white “reason” on the kaleidoscopic variety of other cultures, previously seen as benighted.

“Enlightenment values”—if we take the clichéd ones in the Year 11 essay above—are like Motherhood statements for most people in today’s world. Who would cavil at “reason” as against “superstition”, or at social reform that sought to reduce inequalities? A standard characterisation of The Enlightenment (capital ‘T’, capital ‘E’) has it that what this century of ferment brought into being was akin to a maturing of humanity, a time when we threw off the shackles of tradition and superstition and started to think for ourselves as a species. Certainly it was a time of radical change. One cause of “disruption” was the scientific “revolution” of the centuries before, which threw into turmoil most of the received ideas about how the universe formed and functioned, caused huge reaction and major re-thinking on the part of religious philosophers, and invited sceptical speculation about the accepted order. Another disrupting factor was the “age of exploration” from the mid-C15th to the C17th, which, by bombarding Europe with new discoveries, considerably widened the “possibility field” and legitimated thinking outside the usual parameters. It has been noted that the two periods which have contributed most to Western thought—C5th and C4th BCE Athens and C17th and C18th CE Europe—have been periods of upheaval and radical change in the circumstances of their respective populations. It could be that we are in another such period, and have much to learn from the Greeks and from the Enlightenment…

So, to look closer at this “Enlightenment”—examining some typical statements of major Enlightenment thinkers might allow us to discern certain themes and emphases.

In France, Voltaire:

“Prejudices are what fools use for reason”

“Those who can make you believe absurdities can make you commit atrocities”

“The truths of religion are never so well understood as by those who have lost their reason”

…and Montesquieu:

“The tyranny of a prince in an oligarchy is not so dangerous to the public welfare as the apathy of a citizen in a democracy”

“To become truly great, one has to stand with people, not above them”

…and Rousseau:

“Man is born free and everywhere he is in chains”

“Conscience is the voice of the soul; the passions are the voice of the body.”

…and Diderot:

“Man will never be free until the last king is strangled with the entrails of the last priest”

“Only passions, great passions can elevate the soul to great things”

One might be forgiven for thinking an anti-authority, anti-clerical temper pervaded the French Enlightenment. It is only when we probe a little deeper than the caricature that quite fundamental differences in outlook can be discerned. Nietzsche, whose aphoristic and rhetorical work might be seen as “uneven”, could nevertheless come up with the sharpest insights. He spoke of the “serenely elitist Voltaire” and the “enviously plebeian Rousseau”, saying Voltaire was an “unequivocal top-down moderniser” whereas Rousseau saw that the “Enlightenment project of willed, abstract social reform” could “cause deracination, self-hatred and vindictive rage”. Perhaps in the light of the Terror after the French Revolution we might grant Nietzsche’s observation, whether Rousseau grasped all its implications or not.

And in England, the “Enlightenment” took a slightly different course: what was an anti-authority ferment in France played out differently in England’s constitutional monarchy (Voltaire was “exiled” to England for a time, and took on many of Locke’s empiricist ideas). There was a widely held view, in what has become the “broadly analytic tradition” in modern Anglo-American philosophy, that the most important thing happening in England from the early C17th through the C18th was the gathering ascendancy of “British Empiricism” as a world-view.

The C17th and C18th development of the British Empiricist School was closely paralleled by the exponential growth of the experimental sciences and their discovery of an identity distinct from pure mathematics. Russell’s notion of a “reciprocal causation” between the circumstances people live in and what they think helps explain this synergistic flourishing. But, of course, the empiricist “attitude” is not new with the birth of modern science: a curiosity giving scope to observation and experimentation has always been at least a small part of being human, and has been responsible for enormous changes in what we know of the world.

An important perspective on the Enlightenment period sees it through the lens of the “Rationalist/Empiricist” dichotomy. The simplistic “England was Empiricist and the Continent was Rationalist” is just that: simplistic and virtually useless as an explanation of what was going on. David Hume, whom we will get back to later as one of the most “reasonable” voices of the Enlightenment, said that the central philosophical debate of his day was waged between “speculative atheists” and “religious philosophers”. In the England of the late C17th it was possible to offer a publicly persuasive “confutation of atheism” – the title of the first series of Boyle Lectures, by Richard Bentley in 1692, which gave natural theology a prominent place in intellectual discourse. The lectures delivered over the period 1692-1732 were widely regarded as the most significant public demonstration of the “reasonableness” of Christianity in the early modern period, characterised by that era’s growing emphasis upon rationalism and its increasing suspicion of ecclesiastical authority. Significant empiricist philosophers like Locke and Berkeley, and scientists of the stature of Newton, were advocates of some of the then current (rationalist) arguments that sought to prove the existence of God. And on the continent, quite radical empiricist voices punctuated the generally rationalist tone: Voltaire, as mentioned above, was greatly influenced by Locke’s empiricism, adopting his refutation of innate ideas to criticise Descartes, and making fun of the famous rationalist philosopher Leibniz in Candide; Condillac took Locke’s empiricism to a much more extreme position in his Traité des sensations; and La Mettrie, the extreme materialist, wrote Man a Machine. On the other hand, J S Bach, who flourished in the first half of the C18th, might be read as presenting a mathematically precise musical argument for faith arrived at by reason.

It seems that once we characterise an era by its prevailing ideology, we are drawn to view the entire period—with its seething mass of conflicting ideas, interests, personalities, politics and events—as best understood from that perspective alone. Once embarked on a theme, our thoughts move effortlessly among ideas marked by similarity and contiguity. Confirmation bias then lights the way as deeper and more thorough research shores up the forming thesis. David Hume pointed out this “habit of mind”, and it was, I believe, one of his most important insights; it has also been one of his most overlooked. To understand why, it will be useful to survey several perspectives on Hume, each stridently asserting a different version of his place in the “Enlightenment” milieu. Firstly, the “British Empiricism” perspective: Hume as third in the triumvirate of Locke/Berkeley/Hume.

He is often described as representing the “dead end” of empirical philosophy, as arriving at such “shocking conclusions” that philosophy has been reeling ever since. Bertrand Russell talks of the “self-refutation of rationality”, sees Hume’s scepticism as being “inescapable for an empiricist”, and expresses the “hope that something less sceptical than Hume’s system may be discoverable.” Hume took the empiricist project to its logical conclusion. What we can know for certain by the application of empirical principles is strictly limited. Locke said as much, Berkeley halved what Locke believed we could know, and Hume put paid to most of the rest. There is at least a surface parallel with Socrates’ insistence that human ignorance must be understood before knowledge is possible, but there seems to be a difference in what is possible after this “extent of human ignorance” is grasped.

The consequences of Hume’s philosophy are no less than the death of all rationalistic metaphysics and ethics, the acceptance of a purely descriptive role for natural science, and the inclusion of human thought and action as natural processes within the province of biology and psychology.[2]

And how did he do this? Ostensibly by the application of empirical methods. The “British Empiricists” are seen as primarily concerned to provide an account of the philosophical foundations of human knowledge in general, and of modern science in particular. There is a definite ideological overreach in Macnabb’s summation of Hume’s legacy, but the same ideology forms an important strand running through the broadly analytic “tradition” that is still the dominant philosophical milieu in the Anglosphere, postmodern relativist incursions notwithstanding. The gospel of British Empiricism has been parodied along the lines of:

“Let there be light!” and there was light, and He called it “renaissance”, but saw that there was still darkness, so He took a rib of the renaissance with which to make greater light. But the rib broke, and there arose two false lights, one Bacon, meaning “Father of the British Empiricists” and one Descartes, meaning “Father of the Continental Rationalists”. And the Creator saw that they should war, so he divided them by a great gulf, until there should arise in the east a great philosopher who shall be unlike them and yet like them, who will bring true light and unite them. And thus it was that Bacon begat Hobbes, and Hobbes begat Locke, and Locke begat Berkeley, and Berkeley begat Hume. And thus it was that Descartes begat Spinoza, and Spinoza begat Leibniz, and Leibniz begat Wolff. And then it was that there arose the great sage of Königsberg, the great Immanuel, Immanuel Kant, who, though neither empiricist nor rationalist, was like unto both. He it was who combined the eye of the scientist with the mind of the mathematician. And this too the Creator saw, and he saw that it was good, and he sent goodly men and scholars true to tell the story wherever men should henceforth gather to speak of sages past.[3]

And of course this history of early modern philosophy has been called into serious question, but it still has explanatory power, and leaves a void to fill if it is rejected outright.

Another perspective claims to fill that void. Rather than seeing Hume as the apotheosis of British Empiricism, we should situate him in his historical, social and political context. If, as he suggested, the main philosophical debates of the time were between “Religious Philosophers” and “Speculative Atheists”, it would be fair to assume that Hume had a position on this debate. And we do encounter religion in the bulk of his philosophical writings. The perspective that Hume’s was a “philosophy of irreligion” suffers from the usual pejorative connotations of that term. The Australian OED gives “indifference or hostility to religion”, and a moment’s thought shows that this is a strong disjunction: you cannot be indifferent and hostile at the same time. The proponents of the “irreligious” perspective seem on close reading to equivocate between the two denotations, or, if they make a definite case for “indifferent”, they allow the pejorative connotations around that word free rein, perhaps occluding the standard “having no partiality for or against”. Hume was hostile to the robust theism in the major religions, especially Christianity, because he saw belief in it used to legitimise the various atrocities associated with religious wars, crusades and the like; however, he never espoused the aggressive and pugnacious atheism of someone like, say, Dawkins in our era. Hume apparently once said to Baron d’Holbach, “I’ve never even met an atheist”. Like the “British Empiricism” perspective, the “Irreligious” one is useful but partial.

A further perspective: throughout the twentieth century and up to the present time, Hume’s philosophy has generally been understood in terms of two core themes, scepticism and naturalism. The obvious difficulty is how these two themes are related to each other, and which one represents the “real” Hume. How to reconcile Hume’s radical scepticism with his efforts to advance a “science of man”—a tension that pervades his entire philosophy and is most apparent in the Treatise—has exercised many good minds. It has given rise to technical arguments about sets and sub-sets of scepticism; arguments about whether Hume actually believed that the Pyrrhonian end-point was unavoidable; questions about how a “moderate” scepticism could be advanced after an “obvious” acknowledgement of Pyrrhonism; and readings of Hume as pursuing an essentially destructive or negative philosophical program, the principal aim of which is to show that our “common sense beliefs” (e.g. in causality, the external world, the self, and so on) lack any foundation in reason and cannot be justified. This sceptical reading of Hume’s philosophy dates back to its early reception, especially by two of Hume’s most influential Scottish critics, Thomas Reid and James Beattie. Viewed this way, Hume’s reputation is well summed up by Bertrand Russell:

David Hume is one of the most important among philosophers, because he developed to its logical conclusion the empirical philosophy of Locke and Berkeley, and by making it self-consistent made it incredible. He represents, in a certain sense, a dead end: in his direction, it is impossible to go further.[4]

In important ways, philosophy in the English-speaking world has been “after-Hume” in accord with these perspectives on his work. And in important ways these sorts of perspectives exacerbate a tendency that has been building since Bacon and well before: philosophy begins in wonder, but when certain ideologies are adopted, what it is acceptable to wonder about is categorised, and other types of enquiry outside these categories are “sequestered”—sometimes as scientific disciplines, sometimes as theology, sometimes as “nonsense”. The empirical philosophies of the 17th and 18th centuries set new bounds for what could be “seriously” considered—the doctrine that all knowledge is ultimately based on sense-experience placed religion and metaphysics outside the realm of the knowable, and thus out of the realm of philosophy. Reduction to the material oversimplifies and distorts. And the analytic tradition has imbibed, to near-intoxication, the caveat on emotions that goes back at least to Plato. Part of Martha Nussbaum’s thesis in Love’s Knowledge is that it is only via literature that certain truths are apprehensible, because it is only literature that can explore some of their depths—“powerful emotions have an irreducibly important cognitive role to play in communicating certain truths”.[5]

The “style” of the modern analytic tradition was set by Locke’s tone: it powerfully conveyed the belief…

…that the truths the philosopher had to tell are such that the plain clear general non-narrative style most generally found in philosophical articles and treatises is in fact the style best suited to state any and all of them.[6]

That “snow is white” is true if and only if snow is white is an honest attempt to encapsulate something of the “essence” of truth in a proposition that the analytic tradition finds unexceptionable. It doesn’t profess to “reconnect us to higher possibilities”. As the most obvious, uncontroverted and uncomplicated thing that can be said about the meaning of “truth”, it has about as much resonance as a mission statement. However, it is an end-product of generations of struggle to dispose of what philosophical discourse on truth, under analysis, proves to be: an unwieldy mass of superstition, metaphysical confusion and unwarranted conflation of related concepts. Outsiders bemusedly ask what went wrong – how could such labour produce such seemingly insignificant results?

The answer lies in the ideology of the modern analytic tradition, which has developed from the empiricist mindset that defined what was and was not part of philosophy, whilst incorporating only selected features of older traditions. This sequestration has simplified philosophy within the analytic tradition. Read a certain way, David Hume can be seen as doing the most thoroughgoing hatchet-job: his “commit it then to the flames” at the end of the Enquiry became a manifesto for many. But what we have in the intellectual milieu of the C21st is a bastardisation of Hume: Russell’s idea—of a “self-refutation of rationality” derived from Hume—has fed the sceptical/relativist postmodern zeitgeist, while Hume’s perceived “extreme” empiricism has fed into the analytic tradition in ways that have spawned behaviourism and radical scientistic materialism.

I believe that Hume was better than this. Might I, as “diffidently” as Hume might, suggest another perspective to cast long-overdue light on his (overlooked) contribution? Russell’s “self-refutation of rationality” was a step too far: the scientistic/materialist mindset has an ideological propensity to view “rationality” as the logical 1-2-3 analytic sort of thinking that is only part of rationality, but perhaps, as A C Graham says…

Logicality itself is only one of the varieties, and not necessarily the most important for judging someone intelligent. Reason in the narrow sense can presume too much on being the capacity which distinguishes human from animal; an exclusively logical mind, if such is conceivable, would be less than animal, logical operations being the human activity most easily duplicated by a computer.[7]

This embedded hyper-rationalism within the basically empiricist tradition has had far-reaching ramifications for the modern mind: Hume’s uncoupling of cause/effect, is/ought and the “necessary connexion between any two ideas whatsoever” either does or does not put paid to “rationality”; Russell melodramatically asserted that it does. If rationality is conceived as reason in the “narrow sense” that Graham alludes to above, then Hume does mark an end point, and modern philosophy since should have descended into Pyrrhonian scepticism or moved in a completely different direction. But it did neither. Yes, there are extreme relativistic and sceptical camps out there, and yes, there have been attempts, often via “French theory”, to branch out in new directions… but to various dead ends. And yes, before that we had the “linguistic turn” that sought to make meaning out of language, and yes, we have had logical positivism, reductive materialism and many other isms, but Hume’s actual nailing of what human rationality is at a basic level has not had the effect it should have had on the way we think today.

Hume’s important insight, as flagged earlier, was not to call rationality into question, but to show that using (exclusively) the small part of it that is narrow, analytical reasoning to “argue for” any certain conclusion at all—God, causality, induction, the boiling point of water—cannot deliver demonstrative “proof”. Simone Weil, in Gravity and Grace, wrote: “The intelligence has nothing to discover, it has only to clear the ground. It is only good for servile tasks.” But to jump from this sort of insight to a belief in the impossibility of meaning/knowledge/truth requires an “ideological irruption”. Note: the intelligence is good, albeit for servile tasks. To write off the intelligence as worthless because it cannot guarantee the conclusions it reaches is tantamount to an all-or-nothing deductivist scepticism that writes off any incomplete, less-than-perfect train of thought that doesn’t converge on an “entailment”. It is to use a type of analysis that is only a part of thought to stand in for all of thought.

Hume was no sceptic (capital S): he understood scepticism to be a useful tool, but a barren religion:

The chief and most confounding objection to excessive scepticism is that no durable good can ever result from it; while it remains in its full force and vigour.[8]

And he also recognised, with that last clause “[while] it remains in its full force and vigour”, the emotional content of our motivations to accept or reject so-called objective reasonings.

Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.[9]

This chimes with what A C Graham says in “Poetic and Mythic Varieties of Correlative thinking”—there is “something about objective knowledge that obstructs a subjective recognition that our choices are between directions in which we are being moved by conflicting forces from outside ourselves” (my italics) and “The analytic remains imprisoned in itself, seems to start from itself, forgetting its dependence on spontaneous correlation for its concepts and on spontaneous motivations for its prescriptions.”[10]

It seems that we live in a time like the Enlightenment, like C5th Greece, in an onrush of potentially cataclysmic change—foundations are being challenged; the very ways that foundations can be challenged are being challenged; fundamental concepts like truth are under attack—and our ability to cope, to take control, to find a way through, has been fatally compromised by the new great schism, between reason and unreason, that we see everywhere, and which is in part a legacy of how the “Enlightenment” has been interpreted. The current polarisation, between the hyper-rational techno-elite world of Google, Amazon, et al., and the moiling of hot, primitive emotions on the world wide web, is one end result of a particular focus on reason—not the beneficent light of reason that seeks to encompass all things, but the spotlight of a reason confined within its own limits that can see its own progress only as an unmixed good. This hubris provokes the equal and opposite reaction: populism; distrust of experts; blind rage at a regime that does not take into account the crucial parts of being human that are not measurable, monetise-able, or reducible to propositions that can be manipulated; and, ultimately, bloody revolution.

A judicious mix of Nietzschean Perspectivism and Humean reason might afford us the best means of making sense of the legacy of the Enlightenment.

Tom McWilliam, September 2018







[1] Part of a sample essay for students to consider when they are “researching” the period.

[2] Macnabb, D. G. C., Introduction to Hume, D., A Treatise of Human Nature, Book I, Fontana, 1962, 11-12

[3] With apologies to David Fate Norton.

[4] Russell, B., A History of Western Philosophy, Allen & Unwin, 1947, 685

[5] Nussbaum, M., Love’s Knowledge, 1990, 7

[6] Ibid., 8

[7] Graham, A. C., Unreason Within Reason: essays on the outskirts of rationality, Open Court, Illinois, 1992.

[8] Hume, D., An Enquiry Concerning Human Understanding, OUP, 1980, 12, 23

[9] Hume, D., A Treatise of Human Nature, Bk II, Pt III, Sect. III, OUP, 1960, 415

[10] Graham, op. cit., 221
