Introduction: Open Utopia

“Today we are people who know better, and that’s both a wonderful and terrible thing.”

– Sam Green, Utopia in Four Movements

Utopia is a hard sell in the twenty-first century. Today we are people who know better, and what we know are the horrors of “actually existing” Utopias of the previous century: Nazi Germany, Stalin’s Soviet Union, Maoist China, and so on in depressing repetition.1 In each case there was a radical break with the present and a bold leap toward an imagined future; in every case the result was disastrous in terms of human cost. Thankfully, what seems to be equally consistent is that these Utopias were relatively short-lived. History, therefore, appears to prove two things: one, Utopias, once politically realized, are staggering in their brutality; and two, they are destined to fail. Not exactly a ringing endorsement.

Yet we need Utopia more than ever. We live in a time without alternatives, at “the end of history” as Francis Fukuyama would have it, when neoliberal capitalism reigns triumphant and uncontested.2 There are still aberrations: radical Islam in the East, neo-fascist xenophobia in the West, and a smattering of socialist societies struggling around the globe, but by and large the only game in town is the global free market. In itself this might not be so bad, except for the increasingly obvious fact that the system is not working, not for most people and not most of the time. Income inequality has increased dramatically both between and within nations. National autonomy has become subservient to the imperatives of global economic institutions, and federal, state, and local governance are undermined by the protected power of money. Profit-driven industrialization and the headlong rush toward universal consumerism are hastening the ecological destruction of the planet. In short: the world is a mess. Opinion polls, street protests, and volatile voting patterns demonstrate widespread dissatisfaction with the current system, but the popular response so far has largely been limited to the angry outcry of No! No to dictators, No to corruption, No to finance capital, No to the one percent who control everything. But negation, by itself, changes nothing. The dominant system dominates not because people agree with it; it rules because we are convinced there is no alternative.

Utopia offers us a glimpse of an alternative. Utopia, broadly conceived, is an image of a world not yet in existence that is different from and better than the world we inhabit now. For the revolutionary, Utopia offers a goal to reach and a vision to be realized. For the reformer, it provides a compass point to determine what direction to move toward and a measuring stick to determine how far one has come. Utopia is politically necessary even for those who do not desire an alternative society at all. Thoughtful politics depend upon debate and without someone or something to disagree with there is no meaningful dialogue, only an echo chamber. Utopia offers this “other,” an interlocutor with which to argue, thereby clarifying and strengthening your own ideas and ideals (even if they lead to the conclusion that Utopia is undesirable). Without a vision of an alternative future, we can only look backwards nostalgically to the past, or unthinkingly maintain what we have, mired in the unholy apocalypse that is now. Politically, we need Utopia.

Yet there are theoretical as well as practical problems with the project. Even before the disastrous realizations of Utopia in the twentieth century, the notion of an idealized society was attacked by both radicals and conservatives. From the Left, Karl Marx and Frederick Engels famously criticized Utopians for ignoring the material conditions of the present in favor of fantasies of a future–an approach, in their estimation, that was bound to result in ungrounded and ineffectual political programs, a reactionary retreat to an idealized past, and inevitable failure and political disenchantment. “Ultimately,” they wrote in The Communist Manifesto, “when stubborn facts had dispersed all intoxicating effects of self-deception, this form of socialism end[s] in a miserable fit of the blues.”3 That is to say, the high of Utopia leads, inevitably, to the crushing low of a hangover. From the Right, Edmund Burke disparaged the Utopianism of the French Revolution for refusing to take into account the realities of human nature and the accumulated wisdom of long-seated traditions. With some justification, Burke felt that such leaps into the unknown could only lead to chaos and barbarism.4 Diametrically opposed in nearly every other facet of political ideology, these lions of the Left and Right could agree on one thing: Utopia was a bad idea.

Between the two poles of the political spectrum, for those in the center who simply hold on to the ideal of democracy, Utopia can also be problematic. Democracy is a system in which ordinary people determine, directly or through representation, the system that governs the society they live within. Utopias, however, are usually the products of singular imaginations or, at best, the plans of a small group: a political vanguard or artistic avant-garde. Utopians too often consider people as organic material to be shaped, not as willful agents who do the shaping; the role of the populace is, at best, to conform to a plan of a world already delivered complete. Considered a different way, Utopia is a closed program in which action is circumscribed by an algorithm coded by the master programmer. In this program there is no space for the citizen hacker. This is one reason why large-scale Utopias, made manifest, are so horrific and short-lived: short-lived because people tend not to be so pliable, and therefore insist on upsetting the perfect plans for living; horrific because people are made pliable and forced to fit the plans made for them.5 In Utopia the demos is designed, not consulted.

It is precisely the imaginative quality of Utopia–that is, the singular dream of a phantasmagorical alternative–that seems to damn the project to naïve impracticality as an ideal and megalomaniac brutality in its realization. But without political illusions, with what are we left? Disillusion, and its attendant discursive practice: criticism.6 Earnest, ironic, sly or bombastic; analytic, artistic, textual, or performative; criticism has become the predominant political practice of intellectuals, artists, and even activists who are dissatisfied with the world of the present, and ostensibly desire something new. Criticism is also Utopia’s antithesis. If Utopianism is the act of imagining what is new, criticism, derived from the Greek words kritikos (to judge) and, perhaps more revealingly, krinein (to separate or divide), is the practice of pulling apart, examining, and judging that which already exists.

One of the political advantages of criticism–and one of the reasons why it has become the preferred mode of political discourse in the wake of twentieth-century Utopian totalitarianism–is that it guards against the monstrous horrors of political idealism put into practice. If Utopianism is about sweeping plans, criticism is about pointed objections. The act of criticism continually undermines any attempt to project a perfect system. Indeed, the very act of criticism is a strike against perfection: implicitly, it insists that there is always more to be done. Criticism also asks for input from others. It presupposes a dialogue between the critic and who or what they are criticizing–or, ideally, a conversation amongst many people, each with their own opinion. And because the need to criticize is never-ending (one can always criticize the criticism itself), politics remains fluid and open: a permanent revolution. This idea and ideal of an endless critical conversation is at the center of democratic politics, for once the conversation stops we are left with a monolithic ideal, and the only politics that is left is policing: ensuring obedience and drawing the lines between those who are part of the brave new world and those who are not.7 This “policing” is the essence of totalitarianism, and over the last century the good fight against systems of oppression, be they fascist, communist or capitalist, has been waged with ruthless criticism.

But criticism has run its political course. What was once a potent weapon against totalitarianism has become an empty ritual, ineffectual at best and self-delusional at worst. What happened? History. The power of criticism is based on two assumptions: first, that there is an intrinsic power and worth in knowing or revealing the Truth; and second, that in order to reveal the Truth, belief–often based in superstition, propaganda, and lies–must be debunked. Both these assumptions, however, have been undermined by recent material and ideological changes.

The idea that there is a power in knowing the Truth is an old one. As the Bible tells us in the Gospel of John (8:31-33) “And ye shall know the truth, and the truth shall make you free.”8 What constituted the “truth” at that time was hardly the empirical fact of today–it was what we might call the supreme imaginary of the Word of God, communicated through the teachings of Jesus Christ. Nonetheless, these are the seeds of an idea and ideal that knowing the answer to life’s mysteries is an intrinsic good. As I have argued elsewhere,9 this faith in the power of the Truth is integral to all modern political thought and liberal-democratic politics, but it is given one of its purest popular expressions in Hans Christian Andersen’s 1837 tale “The Emperor’s New Clothes.” The story, as you may recall from your childhood, is about an emperor who is tricked into buying a spectacular suit of non-existent clothing by a pair of charlatans posing as tailors. Eager to show it off, the Emperor parades through town in the buff as the crowd admires his imaginary attire. Then, from the sidelines, a young boy cries out: “But he has nothing on,” and, upon hearing this undeniable fact, the people whisper it mouth to ear, awaken from their illusion, and live happily ever after. Is this not the primal fantasy of all critics–that if they just revealed the Truth, the scales would fall from people’s eyes and all would see the world as it really is? (Which, of course, is the world as the critic sees it.)

There was once a certain logic to this faith in the power of the possession of Truth–or, through criticism, the revealing of a lie. Within an information economy where there is a scarcity of knowledge, and often a monopoly on its production and distribution, knowledge does equal power. To criticize the official Truth was to strike a blow at the church or state’s monopoly over meaning. Critique was a decidedly political act, and the amount of effort spent by church and state in acts of censorship suggests its political efficacy. But we do not exist in this world anymore. We live in what philosopher Jean-François Lyotard named “the postmodern condition,” marked by the “death of the master narrative” in which Truth (or the not so Noble Lie) no longer speaks in one voice or resides in one location.10

The postmodern condition, once merely an academic hypothesis pondered by an intellectual elite, is now, in the Internet age, the lived experience of the multitude. On any social or political issue there are hundreds, thousands and even millions of truths being claimed. There are currently 1 trillion unique URLs on the World Wide Web, accessed by 2 billion Google searches a day. There are more than 70 million videos posted on YouTube, and about 30 billion tweets have been sent. The worldwide count of blogs alone exceeds 130 million, each with a personalized perspective and most making idiosyncratic claims.11 Even the great modern gatekeepers of the Truth–BBC, CNN and other “objective” news outlets–have been forced to include user-generated content and comment boards on their sites, with the result that no singular fact or opinion stands alone or remains unchallenged.

It was the great Enlightenment invention of the Encyclopedia that democratized Truth–but only in relation to its reception. Wikipedia, the online encyclopedia with its 3.5 million-and-counting entries in English alone has democratized the production of truths.12 This process is not something hidden, but part of the presentation itself. Each Wikipedia page is headed by a series of tabs that, when clicked, display the encyclopedia entry, public discussion about the definition provided, the history of the entry’s production, and a final tab: “edit this page,” where a reader has the chance to become a (co)producer of knowledge by editing and rewriting the original entry. In Wikipedia the Truth is transformed from something that is into something that is becoming: built, transformed, and revised; never stable and always fluid: truth with a small “t.”

Today’s informational economy is no longer one of monopoly or scarcity–it is an abundance of truth…and of critique. When power is wielded through a monopoly on Truth, then a critical assault makes a certain political sense, but singularity has now been replaced by plurality. There is no longer a communications citadel to be attacked and silenced, only an endless plain of chatter, and the idea of criticizing a solitary Truth, or swapping one for the other–the Emperor wears clothes/the Emperor wears no clothes–has become increasingly meaningless. As the objects of criticism multiply, criticism’s power and effect directly diminishes.

Criticism is also contingent upon belief. We often think of belief as that which is immune to critique: the absolute confidence of the religious fundamentalists of today’s world, or of the totalitarian communists and fascists of the last century–those who possess what we call blind belief, which criticism cannot touch. This is not so, for it is only for those who truly believe that criticism still matters. Criticism threatens to undermine the very foundation of existence for those who build their lives on the edifice of belief. To question, and thus entertain doubt, undermines the certainty necessary for thoroughgoing belief. This is why those with such fervent beliefs are so hell-bent on suppressing their critics.

But can one say, in most of the world today, that anyone consciously believes in “the system”? Look, for instance, at the citizens of the United States and their opinions about their economic system. In 2009, the major US pollster Rasmussen Reports stated that only a slim majority of Americans – 53 percent – believed that capitalism is a better system than socialism.13 This finding was mirrored by a poll conducted a year later by the widely respected Pew Research Center for the People and the Press, in which only 52 percent of Americans expressed a favorable opinion of capitalism.14 Just a reminder: these polls were taken after the fall of the Soviet Union and the capitalist transformation of China, in a country with no anti-capitalist party, where the mass media lauds the free market and suggests no alternatives, and where anti-communism was raised to an art form. This lack of faith in the dominant system of capitalism is mirrored worldwide. A BBC World Service poll, also from 2009, found that across twenty-seven (capitalist) countries, only 11 percent of the public thought free-market capitalism was working well. Asked if they thought that capitalism “is fatally flawed and a different economic system is needed,” 23 percent of the 29,000 people surveyed answered in the affirmative, with the proportion of discontents growing to 35 percent in Brazil, 38 percent in Mexico and 43 percent in France.15

My anti-capitalist friends are thrilled with these reports. Surely we’re waiting for the Great Leap Forward. I hate to remind them, however, that if the system is firmly in control, it no longer needs belief: it functions on routine…and the absence of imagination. That is to say, when ideology becomes truly hegemonic, you no longer need to believe. The reigning ideology is everything: the sun, the moon, the stars; there is simply nothing outside–no alternative–to imagine.16 Citizens no longer need to believe in or desire capitalism in order to go along with it, and dissatisfaction with the system, as long as it is leveled as a critique of the system rather than providing an alternative, matters little. Indeed, criticism of neoliberal capitalism is a part of the system itself–not as a healthy check on power as many critics might like to believe, but as a demonstration of the sort of plurality necessary in a democratic age for complete hegemonic control.

I am reminded of the massive protests that flooded the streets before the US invasion of Iraq. On February 15, 2003, more than a million people marched in New York City, while nearly 10 million demonstrated worldwide. What was the response of then-president George W. Bush? He calmly and publicly acknowledged the mass demonstration as a sign that the system was working, saying, “Democracy’s a beautiful thing … people are allowed to express their opinion, and I welcome people’s right to say what they believe.”17 This was spin and reframing, but it got at a fundamental truth. Bush needed the protest to make his case for a war of (Western) freedom and liberty versus (Arab) repression and intolerance. Ironically, he also needed the protest to legitimize the war itself. In the modern imagination real wars always have dissent; now that Bush had a protest he had a genuine war. Although it pains me to admit this, especially as I helped organize the demonstration in New York, anti-war protest and critique have become an integral part of war.

When a system no longer needs to base its legitimacy on the conscious belief of its subjects–indeed, no longer has to legitimize itself at all–the critical move to debunk belief by revealing it as something based on lies no longer retains its intended political effect. This perspective is not universally recognized, as is confirmed by a quick perusal of oppositional periodicals, be they liberal or conservative. In each venue there will be criticisms of official truth and the positing of counter-truths. In each there exist a thousand young boys yelling out: “But he has no clothes!” To no avail. The debunking of belief may continue for eternity as a tired and impotent ritual of political subjectivity–something to make us think and feel as if we are really challenging power–but its importance and efficacy are nil.

Dystopia, Utopia’s doppelganger, speaks directly to the crisis in belief, for dystopias conjure up a world in which no one wants to believe. Like Utopias, dystopias are an image of an alternative world, but here the similarities end. Dystopian imaginaries, while positing a scenario set in the future, always return to the present with a critical impulse–suggesting what must be curtailed if the world is not to end up the way it is portrayed. Dystopia is therefore less an imagination of what might be than a revealing of the hidden logic of what already is. Confronted with a vision of our horrific future, dystopia’s audience is supposed to see the Truth–that our present course is leading us to the rocks of disaster–and, having woken up, now act. Dystopic faith in revelation and the power of the (hidden) truth makes common cause with traditional criticism and suffers the same liabilities.

Furthermore, the political response generated by dystopia is always a conservative one: stop the so-called progress of civilization in its course and … and what? Where do we go from here? We do not know because we have neither been offered a vision of a world to hope for nor encouraged to believe that things could get better. In this way dystopias, even as they are often products of fertile imagination, deter imagination in others. The two options presented to the audience are either to accept the dystopic future as it is represented, or turn back to the present and keep this future from happening. In neither case is there a place for imagining a desirable alternative.

Finally, the desire encouraged through dystopic spectatorship is perverse. We seem to derive great satisfaction from vicariously experiencing our world destroyed by totalitarian politics, rapacious capitalism, runaway technology or ecological disaster, and dystopic scenarios–1984, Brave New World, Blade Runner, The Day After Tomorrow, The Matrix, 2012–have proved far more popular in our times than any comparable Utopic text. Contemplating the haunting beauty of dystopic art, like Robert Graves and Didier Madoc-Jones’s recent “London Futures” show at the Museum of London in which the capital of England lies serenely under seven meters of water,18 brings to mind the famous phrase of Walter Benjamin, that our “self-alienation has reached such a degree that it can experience its own destruction as an aesthetic pleasure of the first order.”19 While such dystopic visions are, no doubt, sincerely created to instigate collective action, I suspect what they really inspire is a sort of solitary satisfaction in hopelessness. In recent years a new word has entered our vocabulary to describe this very effect: “disasterbation.”20

So here we are, stuck between the Devil and the deep blue sea, with a decision to make. Either we drift about, leveling critiques with no critical effect and reveling in images of our impending destruction–living a life of political bad faith as we desire to make a difference yet don’t–or we approach the Devil. It is not much of a choice. If we want to change the world we need to abandon the political project of pure criticism and strike out in a new direction. That is, we need to make our peace with Utopia. This cannot happen by pretending that Utopia’s demons do not exist–creating a Utopia of Utopia; instead it means candidly acknowledging the problems with Utopia, and then deciding whether the ideal is still salvageable. This revaluation is essential, as it is one thing to conclude that criticism is politically impotent, but quite another to suggest that, in the long shadow of its horrors, we resurrect the project of Utopianism.

“Today we are people who know better, and that’s both a wonderful and terrible thing.”21 When Sam Green presents this line in his performance of Utopia in Four Movements it is meant as a sort of lament that our knowledge of Utopia’s horrors will never again allow us to have such grand dreams. This knowledge is wonderful in that there will be no large-scale atrocities in the name of idealism; it is terrible in that we no longer have the capacity to envision an alternative. But we needn’t be so pessimistic; perhaps “knowing better” offers us a perspective from which we can re-examine and re-approach the idea and ideal of Utopia. “Knowing better” allows us to ask questions that are essential if Utopia is to be a viable political project.

The paramount question, I believe, is whether or not Utopia can be opened up–to criticism, to participation, to modification, and to re-creation.22 It is only a Utopia like this that will be resistant to the ills that have plagued the project: its elite envisioning, its single-minded execution, and its unyielding manifestation. An Open Utopia that is democratic in its conception and protean in its realization gives us a chance to escape the nightmare of history and start imagining anew.

Another question must also be addressed: How is Utopia to come about? Utopia as a philosophical ideal or a literary text entails no input other than that of its author, and no commitment other than time and interest on the part of its readers; but Utopia as the basis of an alternative society requires the participation of its population. In the past people were forced to accept plans for an alternative society, but this is the past we are trying to escape. If we reject the anti-democratic, politics-from-above model that has haunted past Utopias, can the public be persuaded to ponder such radical alternatives themselves? In short, now that “we are people who know better,” can we be convinced to give Utopia another chance?

These are vexing questions. Their answers, however, have been there all along, from the very beginning, in Thomas More’s Utopia.

When More wrote Utopia in the early sixteenth century he was not the first writer to have imagined a better world. The author owed a heavy literary debt to Plato’s Republic wherein Socrates lays out his blueprint for a just society. But he was also influenced by the political and social imaginings of classic authors like Plutarch, Sallust, Tacitus, Cicero and Seneca, with all of whom an erudite Renaissance Humanist like More would have been on intimate terms. The ideal of a far-off land operating according to foreign, and often alluring, principles was also a stock-in-trade in the tales of travel popular at the time. The travelogues of Sir John Mandeville were bestsellers (albeit amongst a limited literate class) in the fourteenth century, and adventurers’ tales, like those of the late fifteenth and early sixteenth-century explorer Amerigo Vespucci, were familiar to More. Most important, the Bible–the master-text of More’s European home–provided images of mythical-historical lands flowing with milk and honey, and glimpses of a world beyond where the lion lies down with the lamb.

By the time More sat down to write his book, envisioning alternative worlds was a well-worn literary tradition, but Utopia literally named the practice. One need not have read his book, nor even know that such a book exists, to be familiar with the word, and “Utopia” has entered the popular lexicon to represent almost any positive ideal of a society. But, given how commonly the word is used and how widely it is applied, Utopia is an exceedingly curious book, and much less straightforward than one might think.

Utopia is actually two books, written separately and published together in 1516 (along with a great deal of ancillary material: maps, marginalia, and dedications contributed by members of Renaissance Europe’s literary establishment). Book I is the story of More meeting and entering into a discussion with the traveler Raphael Hythloday; Book II is Hythloday’s description of the land to which he has traveled–the Isle of Utopia. Scholars disagree about exactly how much of Book I was in More’s mind when he wrote Book II, but all agree that Book II was written first in 1515 while the author was waiting around on a futile diplomatic mission in the Netherlands, and Book I was written a year later in his home in London.23 Chronology of creation aside, the reader of Utopia encounters Book I before Book II, so this is how we too shall start.

Book I of Utopia opens with More introducing himself as a character and taking on the role of narrator. He tells the reader that he has been sent to Flanders on a diplomatic mission for the king of England, and introduces us to his friend Peter Giles, who is living in Antwerp. All this is based in fact: More was sent on such a mission by Henry VIII in 1515 and Peter Giles, in addition to being the author’s friend, was a well-known Flemish literary figure. Soon, however, More mixes fiction into his facts by describing a meeting with Raphael Hythloday, “a stranger, who seemed past the flower of his age; his face was tanned, he had a long beard, and his cloak was hanging carelessly about him, so that, by his looks and habit, I concluded he was a seaman.” While the description is vivid and matter-of-fact, there are hints that this might not be the type of voyager who solely navigates the material plane. Giles explains to More that Hythloday “has not sailed as a seaman, but as a traveler, or rather a philosopher.” Yet it is revealed a few lines later that the (fictional) traveler has been in the company of the (factual) explorer Amerigo Vespucci, whose party he left to venture off and discover the (fictional) Island of Utopia. This promiscuous mix of reality and fantasy sets the tone for Utopia. From the beginning we, the readers, are thrown off balance: Who and what should we take seriously?

Returning to the story: introductions are made, and the three men strike up a conversation. The discussion turns to More’s native country, and Hythloday describes a (fictional) dinner conversation at the home of (the factual) John Morton, Catholic Cardinal, Archbishop of Canterbury, and Lord Chancellor of England, on the harsh laws of England which, at the time, condemned persons to death for the most minor of crimes. At the dinner party Hythloday assumes the role of critic, arguing against such laws in particular and the death penalty in general. He begins by insisting that crime must be understood and addressed at a societal level. Inheritance laws, for instance, leave all heirs but the first son property-less, and thus financially desperate. Standing armies and frequent wars result in the presence of violent and restless soldiers, who move easily into crime; and the enclosure of once common lands forces commoners to criminal measures to supplement their livelihood. Hythloday then finds a fault in juridical logic. Enforcing the death penalty for minor crimes, he points out, only encourages major ones, as the petty thief might as well kill their victim as have them survive as a possible witness. Turning his attention upward, Hythloday then claims that capital punishment is hubris against the Divine, for only God has the right to take a human life. Having thus argued for a sense of justice grounded on earth as well as in the heavens, he concludes: “If you do not find a remedy to these evils it is a vain thing to boast of the severity in punishing theft, which, though it might have the appearance of justice, yet in itself is neither just nor convenient.” It is a blistering critique and a persuasive performance.

The crowd around the archbishop’s dinner table, however, is not persuaded. A lawyer present immediately replies with a pedantic non-reply that merely sums up Hythloday’s arguments. A fool makes a foolish suggestion, trolling only for laughs. And a Friar, the butt of the fool’s jokes, becomes indignant and begins quoting scripture willy-nilly to justify his outrage, engaging in tit-for-tat with the fool and thus derailing the discussion entirely. The only person Hythloday seems to reach is Morton, who adds his own ideas about the proper treatment of vagabonds. But this thoughtful contribution, too, is devalued when the company assembled–motivated not by logic but by sycophancy–slavishly agree with the archbishop. As a Socratic dialogue, a model More no doubt had in mind, the dinner party discussion bombs. Hythloday convinces no one with his logic, fails to engage all but one of his interlocutors, and moves us no closer to the Platonic ideal of Justice. In short, Hythloday, as a critic, is ineffectual.

And not for the only time. Hythloday makes another critical intervention later in Book I, this time making his case directly to More and Giles. Here the topic is private property, which Hythloday believes to be at the root of all society’s ills, crime included. “I must freely own,” he reasons, “that as long as there is any property, and while money is the standard of all other things, I cannot think that a nation can be governed either justly or happily …” Alas, while Hythloday has convinced himself, he is the only one, for there are no ears for his thoughts. More immediately counters with the oft-heard argument that without property to gain and inequality as a spur, humans will become lazy, and Giles responds with a proto-Burkean defense of tradition. Again, Hythloday’s attempts at critical persuasion fail.

Hythloday concludes that critical engagement is pointless. And when More suggests that he, with his broad experience and strong opinions, become a court counselor, Hythloday dismisses the idea. Europeans, he argues, are resistant to new ideas. Princes are deaf to philosophy and are more concerned with making war than hearing ideals for peace. And courts are filled with men who admire only their own ideas and are envious of others. More, himself unconvinced by Hythloday up until now, finally agrees with him. “One is never to offer propositions or advice that we are certain will not be entertained,” he concurs, adding that, “Discourses so much out of the road could not avail anything, nor have any effect on men whose minds were prepossessed with different sentiments.”

But More does not counsel despair and disengagement–he suggests an alternative strategy of persuasion. The problem is not with Hythloday’s arguments themselves, but with the form in which he presents them. One cannot simply present radical ideas that challenge people’s basic assumptions about the world in the form of a reasoned argument, for no one wants to be told they are wrong. “There is another philosophy,” More explains, “that is more pliable, that knows its proper scene, [and] accommodates itself to it.” He goes on to use the example of drama, explaining how an actor must adapt to the language and the setting of the play if his lines are to make sense to the audience. If the drama is a light comedy, More explains, then it makes little sense to play one’s part as if it were a serious tragedy, “For you spoil and corrupt the play that is in hand when you mix with it things of an opposite nature, even though they are much better. Therefore,” he continues, “go through the play that is acting the best you can, and do not confound it because another that is pleasanter comes into your thoughts.”

More makes it clear that his dramaturgical advice is meant to be taken politically. He tells Hythloday: “You are not obliged to assault people with discourses that are out of their road when you see that their received notions must prevent your making an impression on them.” Instead, he counsels, “you ought rather to cast about and to manage things with all the dexterity in your power.” This time, however, it is Hythloday’s turn to be unswayed by argument. He interprets More’s proposal as an invitation to dissemble and rejects it forthwith: “as for lying, whether a philosopher can do it or not I cannot tell: I am sure I cannot do it.”

This revealing exchange may be understood in several ways. The most common reading among Utopia scholars is that More’s advice to Hythloday is an argument for working within the system, to “go through with the play that is acting the best you can,” and to abandon a confrontational style of criticism in favor of “another philosophy that is more pliable, that knows its proper scene, [and] accommodates itself.” To be successful, More seems to counsel, one must cast oneself within “the play that is acting,” that is, the status quo, and “accommodate” one’s ideas to the dominant discourse. Shortly before writing Utopia, More had been asked by Henry VIII to enter his service as a counselor and he was still contemplating the offer while at work on the book. It is thus easy to imagine this whole discussion as a debate of sorts within his own head. More’s conclusion–that to be effective one needs to put aside the high-minded posturing of the critic and embrace the pliability of politics–can be understood as an early rationalization for his own decision to join the King’s council two years later, in 1518.24 (A decision that was literally to cost the man his head in 1535, when he–high-mindedly–refused to bless Henry VIII’s divorce and split from the Catholic Church.) Another popular interpretation of this passage proposes that More is merely trotting out the standard classical arguments in defense of the practice of rhetoric: know your audience, cater to their preferences, and so forth.25 Hythloday, in turn, gives the classic rebuttal: the Truth is fixed and eternal. It is the debate between Aristotle in the Rhetoric and Plato in the Gorgias, retold.

While not discounting either of these interpretations, I want to suggest another: that More–the character and the author–is making a case for the political futility of direct criticism. What he calls for in its place is a technique of persuasion that circumvents the obstacles that Hythloday describes: tradition, narrow-mindedness, and a simple resistance on the part of the interlocutor to being told what to think. More knows that, while the critic may be correct, their criticism can often fall on deaf ears–as it did in all of Hythloday’s attempts. What is needed is another model of political discourse; not rhetoric with its moral relativity, nor simply altering one’s opinions so they are acceptable to those in power, but something else entirely. Where is this alternative to be found? Answering this question entails taking More’s dramatic metaphor seriously.

The play’s the thing. What drama does is create a counter-world to the here and now. Plays fashion a space and place which can look and feel like reality yet is not beholden to its limitations; it is, literally, a stage on which imagination becomes reality. A successful play, according to the Aristotelian logic with which More would have been familiar, is one in which the audience loses themselves in the drama: its world becomes theirs. The world of the play is experienced and internalized and thus, to a certain degree and for a limited amount of time, naturalized. The alternative becomes the norm. Whereas alternatives presented through criticism are often experienced by the audience as external to the dominant logic, as “discourses that are out of their road,” the same arguments advanced within the alternative reality of the play become the dominant logic. Importantly, this logic is not merely approached cognitively, as a set of abstract precepts, but experienced viscerally, albeit vicariously, as a set of principles put into practice.26

What works on the stage might also serve in the stateroom. By presenting views at odds with the norm the critic begins at a disadvantage; he or she is the perpetual outsider, always operating from the margins, trying to convince people that what they know as the Truth might be false, and what they hold to be reality is just one perspective among many. This marginal position not only renders persuasion more difficult but, paradoxically, reinforces the centrality of the norm. The margins, by their very definition, are bound to the center, and the critic, in their act of criticism, re-inscribes the importance of the world they take issue with. Compared to the critic, the courtier has an easier time of it. The courtier, as a yes-man, operates within the boundaries of accepted reality. They needn’t make reasoned appeals to the intellect at all; they merely restate the “obvious”: what is already felt, known and experienced. The courtier has no interest in offering an alternative or even providing genuine advice; their function is merely to reinforce the status quo.

“Casting about,” or the “indirect approach” as it is elsewhere translated,27 provides More with a third position that transcends critic and courtier–one that allows an individual to offer critical advice without being confined to the margins. Instead of countering reality as the critic does, or accepting a reality already given like the courtier, this person creates their own reality. This individual–let us call them an artist–conjures up a full-blown lifeworld that operates according to a different set of axioms. Like Hamlet staging the murder of his father before an audience of the court and the eyes of his treacherous uncle, the artist maneuvers the spectator into a position where they see their world in a new light. The persuasive advantages of this strategy should be obvious. Instead of being the outsider convincing people that what they know to be right is wrong, the artist creates a new context for what is right and lets people experience it for themselves. Instead of negating reality, they create a new one. No longer an outsider, this artist occupies the center stage in their own creation, imagining and then describing a place where their ideals already exist, and then inviting their audience to experience it with them. Book I–a damning critique of direct criticism–ends with this more hopeful hint at an alternative model of persuasion. Book II is More’s demonstration of this technique; his political artistry in practice.

The second book of Utopia begins with Raphael Hythloday taking over the role of narrator and, like the first book, opens with a detailed description of the setting in order to situate the reader. Unlike the real Flanders described by More in Book I, however, the location that Hyth