April Fools: Church of the Universal Ego


It’s April 1st once more, and I think we all know what that means!

Bingo! It’s the annual Feast of Saint Cellach of Armagh! And as we all contemplate the life and legacy of this Irish Archbishop of the 12th century canonized by the Roman Catholic Church, what better time to announce my new spiritual awakening!

I awoke this morning with a fire in my heart, a bruise on my forehead and a copy of Atlas Shrugged beside me on the carpet where I apparently passed out. I don’t remember what happened last night, but whatever it was left me inspired with religious fervor!

After all, Ayn Rand worshipped the egoism and megalomania of self-styled Übermenschen, and argued that as the foremost creative and productive force in society they should have near-unlimited license and entitlement to force others to comply with their uncompromising personal vision. And who embodies that idea better than the over-over-over-overman himself, God?

This was the revelation that led me to found a new religion, based on a new and superior interpretation of the Bible!

Ladies and gentlemen, I introduce to you the Church of the Universal Ego! We (and that's the royal "we") are a non-denominational Christian church, at least until we make it big. And the tenets of our new faith are provided for your digestion below!

  • Supreme Being: The Church of the Universal Ego believes in one divine Creator of heaven and earth, and like most creative people, the Creator has egocentric tendencies. We ascribe the existence and meaning of mankind to the essence of the Creator which we call the Universal Ego, and it is our divinely ordained purpose to love and worship the Creator as our heavenly father, lest we offend the Creator’s considerable self-regard and incur the Creator’s wrath.
  • Creation: We believe that the Creator made the earth in seven days as part of a vanity project that He pursued out of boredom. The Creator spent the first five days narrowly focused on the worldbuilding, which left Him exhausted, and when He recognized He needed to add some central characters, He made mankind in His own likeness as a low-effort way to identify with them and make them relatable.
  • Original Sin: The Church of the Universal Ego subscribes to the doctrine of Original Sin, which mankind incurred upon themselves by disobeying the Creator and eating the fruit of the Tree of Knowledge, making them unworthy to stay in the Garden of Eden. Now, if you're wondering why the Creator would put the Tree of Knowledge there in the first place, we believe that after creating the characters and setting, the Creator noticed there wasn't really much of a story to His story, and the result was unengaging. He introduced the Tree of Knowledge to create a source of conflict, and tainted the fruit with Original Sin to give mankind some character flaws that would make them more compelling and to further drive the plot.
  • Problem of Evil: Traditionally the problem of evil asks why an omnipotent and benevolent deity would tolerate the existence of evil and suffering in mankind. This question, which has puzzled philosophers and theologians for centuries, is fully resolved by the doctrine of the Universal Ego. The Creator isn't a benevolent deity per se; He enjoys performing as a benevolent deity for creative purposes. The universe was conceived as part of a wish-fulfillment project where the Creator performs as a self-insert author surrogate character – an idealized version of Himself which is almighty, all-loving, liked by everyone and the all-around best. The Creator made the human race to love and worship Him, and to do so we must embrace His central dichotomy – that He is Creator-as-author and Creator-as-character. We must show Him gratitude when He offers salvation to us from the sin in the world which He is directly responsible for, and rescues our souls in the end times He will inevitably contrive. And we must accept the great mystery of our faith: which Creator we happen to stumble across depends entirely on His mood.
  • The Bible: We believe that the Old and New Testaments of the Bible were written with the divine inspiration of the Creator. If you're wondering why the Old and New Testaments have such dramatically different tones and messages, the doctrine of the Universal Ego dictates that the Creator came up with the idea of the New Testament when He realized that His creation had serious plot problems. The most important of these problems was that the world revolved around the Creator, but the Creator's character was also inaccessible. He was (at least for story purposes) all-loving and all-powerful, but the admiration mankind felt for Him was still impersonal and esoteric. People could respect His unknowable, almighty spiritual power, but they couldn't really relate to that faceless perfection. To resolve this problem, the Creator decided to take the story in a bold new creative direction, building off the framework He had already established and introducing a reimagined version of Himself that everyone could fall in love with. This is the reboot we now call the New Testament. The Church of the Universal Ego takes no position on the longstanding fan wars over which Testament was better. Suffice it to say, the original had the grander epic setpieces and dramatic moments, but the reboot had a more heartfelt, personal and inspirational story. In short, we say that it comes down to a matter of personal taste.
  • Jesus Christ: As a Christian Church, we believe in Jesus Christ, the Son and savior of mankind. We believe the Creator begat Jesus Christ in the flesh as a way to reinvent Himself and His character and add personality to His divinity. Indeed, the Creator took considerable care to make sure Jesus was as broadly appealing as possible – a lovable, caring guy; prim and chaste, but also genial and unjudgmental; a philosopher but also a simple man who worked with his hands; a morally upstanding devout preacher who would also turn water into wine to spice up a wedding; a refined man with royal blood who still possessed down-to-earth concern for the poor and downtrodden; a man of radical social beliefs who was yet unthreatening to conservative sentiments; a divine superhuman who wielded all the power of the world, and yet a vulnerable human being who could suffer pain and even die for the audience's sympathy. He could inspire admiration and fear, urgency and comfort, awe and sorrow – virtually any emotion the audience needed. The Church of the Universal Ego contends that the message of Jesus Christ is universal and for all peoples – if it wasn't, the Creator wouldn't have workshopped Jesus so hard to make the character such an instant classic for everyone.
  • The Trinity: The Church of the Universal Ego subscribes to the doctrine of the Holy Trinity, though not as outlined in the Apostolic Councils. Our doctrine dictates that the Trinity was a necessary canonical explanation for major plot holes that were exacerbated by the introduction of the Jesus Christ character. In concept, it was simple enough to state that Jesus was the son of God; but eventually the Creator had to explain how Jesus needed to be worshipped as if he were God, and wasn’t lesser than God, and how that could be the case if there were really one and only one God. The solution? The Trinity – an inspired retcon that allowed the Creator to represent Himself as the Father, the Son and the Holy Spirit whenever it suited the plot. We think of the Father, the Son and the Holy Spirit as different ‘personas’ which the Creator will put on – a bit like an actor who plays different roles, but also different roles that all represent the same character. Got it? If you don’t understand it, that’s alright – part of the convenience of the Trinity is that it’s reasonable enough at face value to be acceptable to a casual audience, while being too technically complicated for them to take interest in understanding it fully. The only people who care about the details are bishops, doctrinaires, theologians and other superfans – and they’re already invested in the story canon after all. The Creator surely appreciates their enthusiasm to try and fill in the details themselves, although the bloodshed over this issue was certainly regrettable. 
They may look similar, but one is a demanding overbearing creative perfectionist, and the other is Stanley Kubrick.

The Church of the Universal Ego wants you to know that whoever you are, wherever you are, you have a place in creation; and there is a Creator out there who loves you and has a plan for you and a part for you to play.

And you better play that part and play it well – 'cuz if you don't, that same Creator will become a cold-blooded auteur, and He will take note, judge you, and punish you passive-aggressively – or worst of all, break character. You will not like Him when He breaks character…

There will be hell to pay,

Connor Raikes, a.k.a. Raikespeare

Top 10 Biggest Misconceptions about the French Revolution

Happy Bastille Day to all French people and Francophiles reading my blog that has conspicuously little to do with France.

Today is the 230th anniversary of the storming of the Bastille, a date which serves as a focal point for popular knowledge of and interest in the French Revolution – without hyperbole, one of the most important events in the history of Europe, and perhaps the world. And at the same time, it is one of the most chronically misunderstood events in European history.

I’m not talking strictly about small details, like whether or not Marie Antoinette said “let them eat cake” (she didn’t) or how many people were actually locked in the Bastille (7). At pretty much every scale, from the small details to the broad overview and all in between, the discourse around the French Revolution is riddled with (and often dominated by) serious falsehoods about events, people and goals. As a result, most people outside of France are not only mistaken about why the French Revolution was important; they barely recognize it as the seminal, era-defining event it was.

Now, to clarify, the French Revolution is complicated. It’s nigh impossible to create an easily digestible narrative that properly conveys all the ideologies, factions and personalities involved, and how those interacted and evolved as the revolution progressed. Even professional historians who have dedicated their lives to studying it have serious disagreements about the basic causes and effects of the revolution.

Even so, I have a problem with the popular narrative of the French Revolution. Not because it is reductive, or shallow; I mean, that's how popular narratives work. I have a problem because, well, there's so much about it that is wrong. Even educators repeat falsehoods while only challenging a discrete handful of them. Otherwise, they try to reconcile the historical facts with a popular narrative and framing that is still fundamentally flawed. And if the events themselves are misrepresented, of course people are going to take away the wrong lessons.

This is where we get the simple, moralized narrative of the French Revolution that we see elevated and parodied in pop culture: the rich were greedy, the peasants were hungry, so they overthrew the king, but then it created chaos, and radicals started guillotining the nobles, and then something something Napoleon. If you have heard anything about the French Revolution, you have probably heard a version of this narrative.

I can see the appeal; if you look at it from a kilometer away (or should I say, millaire, super-Revolution nerds??) it resembles what happened, like how a manatee kinda looks like a mermaid. And conveniently, whether you're a hardline right-winger or radical leftist or committed centrist, there's a favorable moral for everybody to take away that's reassuring to your cause!

But – as I asserted before – that narrative is largely inaccurate. And the conclusions people draw from it are tainted by the inaccuracy.

I find the French Revolution so rich in intrigue, and the historical insights drawn from it so profound, that it bugs me to hear the same misconceptions overshadow the history itself. The real French Revolution is not so simple, but it's more gratifying, more compelling, and above all more human, and it constantly makes itself relevant in the modern world it largely shaped.


So on this 230th Anniversary of the Storming of the Bastille, I will go through the big misconceptions of the French Revolution, and dissect them one by one. Starting with:

MYTH: The Storming of the Bastille was the start of the French Revolution, as well as its key event

Let’s get the big ironic one out of the way first. The Storming of the Bastille on July 14th, 1789 is usually portrayed as the starting point of the revolution and its defining moment. After all, it is celebrated as the National Day of France. But by no means was it the most important or revolutionary moment of the period. Hell, I’d say it wasn’t even the most important or revolutionary moment of 1789! But probably top three for the year. Bronze isn’t bad…

Hell of a bronze, though… it was a tough competition.

The Storming of the Bastille gets a lot of attention because of its symbolism, and its resemblance to our popular conception of the French Revolution. But though it did contribute to the unraveling of French political order, its impact in that regard was secondary.

The Bastille was a medieval fortress, and a political prison. On the day of the attack, it was manned by a Swiss regiment under the command of a tactless nobleman who inherited the title from his father. The people storming it were Parisian partisans and a few persuaded French Guardsmen. And yes, the armed crowd literally cut off said nobleman's head and paraded it around the streets on a pike. (And this was before the guillotine!)

You can see why it's symbolic; it's like French Revolution extract. The building simultaneously represented the gloomy remnant of feudal domination and the more contemporary tyranny of oppressing political dissidents. The garrison represented the aristocrats' cooperation with foreign powers and distrust of the commoners. And the result seemed to presage the political violence to come, and its focus on separating heads from shoulders!

But was it the start?

Now, claiming a start to an event as messy and complicated as the French Revolution is always difficult, and you can make a case for a lot of dates. Even so, the Storming of the Bastille didn't start the unraveling of France's political order. The Estates-General had convened in early May. The representatives of the Third Estate had already split off into the National Assembly. And they had already declared their intention to overhaul the French state and remake the constitution during the Tennis Court Oath – so there was already a large organized resistance in open (albeit political) opposition to the King's administration.

Was it the first armed insurrection of the period? Not really. The Day of the Tiles happened over a year earlier, when common folk in Grenoble tried to call for an impromptu parliamentary assembly and had to be put down by force. The Réveillon riots, the first riots in Paris related to the Revolution, happened in April 1789, before the Estates-General.

That said, you could argue that the Storming of the Bastille was the first time that the popular insurrection was aligned with the ongoing movement to remake the French political system, and the first time that the insurgents decisively won their encounter against the forces of the status quo – and in that sense, it was very important. So what did it change?

Well, they didn't free any high-profile political prisoners. As many like to point out, there were only seven people held in the Bastille at the time, and only one who was a true political prisoner: a man who tried to assassinate the French king 30 years earlier. So… not exactly a prisoner of conscience. The rest were four forgers, one lunatic, and one aristocrat who was imprisoned via lettre de cachet for "sexual deviancy" (probably incest).

Then again, what the Bastille stormers really wanted was the gunpowder held for the Bastille's garrison. Earlier that morning, they had seized guns from the large veterans' hospital of Paris, and they needed a powder supply to properly arm themselves – and hopefully to acquire it in the most stirring, symbolically resonant way possible.

But the most important outcome wasn’t what happened that day; it was how people reacted to it.

It caused Louis XVI to panic and back down – pulling foreign mercenaries from Paris and giving his blessing to what would become the French National Guard. The National Assembly now had hard power to back up its claims to legitimacy. And the Bastille became a symbol to rally revolutionaries around the power of the common people.

But did it transition the Revolution from a legal/reform-minded one into a more violent radical one? Again, not really. The conflicts of the revolution remained political and legislative in nature until at least the Champ de Mars Massacre, and really until the declaration of war with Austria.

The Storming of the Bastille was important mainly for the way it aligned with, empowered and contributed to more tangible political movements that were already underway, not strictly for what it did on its own. In my humble view, the Tennis Court Oath was more important and more revolutionary, because it demonstrated a firm commitment by France's most representative body to date to dismantle and rebuild the French political order. Later that year, the Abolition of Feudalism was a massive change to the French state and the relationships between its classes, and the Declaration of the Rights of Man and Citizen was among the most influential political documents ever produced.

The Tennis Court Oath was the most overwhelming display of power in a French tennis court until Rafael Nadal.

The conception that the Storming of the Bastille was more important than it actually was stems in part from the Fête de la Fédération ("Festival of the Federation") that happened a year later on the anniversary of the Storming, making it the predecessor to modern Bastille Day. This festival was organized by the National Constituent Assembly in part because 1790 was a relatively quiet year all things considered, and the centrist monarchien bloc that held power at that moment wanted the Revolution to end. So they put on a big festival for the populace to signal that the Revolution was over, the people had won, a limited constitutional monarchy was what they wanted all along, and now they can kick back and celebrate, and maybe please don't try making the revolution go any further please and thank you… And for that goal, the Storming of the Bastille was a good event to highlight the triumph of the ordinary citizens.

So, making the Storming of the Bastille the start and center-point of the French Revolution sorta stemmed from Monarchist propaganda. Take that as you will.

As for when the Revolution started, I'm a little more maximalist than most; I submit the rarely remarked-upon day of August 20th, 1786 as the date when the French Revolution truly began: the date when controller-general of France Charles Alexandre de Calonne informed Louis XVI that the French government was bankrupt, and there was virtually no way to pay off its debts without stringent political reforms that curtailed the privileges of the nobility and the church.

Everything else that happened in the French Revolution emerged from, and is inseparable from, this moment. It was the point of no return when it became clear that the status quo was untenable and the political order of France needed to change.

Speaking of which, as they come up I'd like to highlight a few facts that people keep ignoring or dismissing. Such as:

Ignored Fact #1: France is Bankrupt

You ever look at the quick escalation of the political crisis in France and think to yourself, "What the hell is the King doing? What are the nobles doing just sitting around on their fat lazy asses while a bunch of bourgeois lawyers aggressively take away their power and privileges? They coulda picked any moment to call in the military, force these upstarts out and put this revolution to rest! They coulda provided for the poor to curry popular favor and weaken the opposition! Why the negligence? Why the passivity? Are they out of touch or are they just stupid?!"

To answer your question, the King and the nobles may have wanted to force the reformers out of power and nip the revolution in the bud – but their hands were tied. Why? Because France is bankrupt.

First of all, even if they could levy a force that could overthrow the reformers and keep the peace, how were they going to pay for it? That’s another IOU to add to the pile, just to return to a status quo where France’s finances are still deep in the red. As much as they hated those reformers, they actually needed those reforms.

For another, many of the financiers France was indebted to were upstart bourgeois capital holders who were pretty much on the side of the constitutionalists. Best not alienate the debt collector!

France's bankruptcy is usually mentioned as the reason for the calling of the Estates-General, but it's much more than that; it means that for the window of time when decisive political action could have prevented the Revolution, Louis XVI didn't have the liquidity to act. The brake pads were broken. Louis XVI couldn't confront the Revolution directly without imperiling the financial future of the country.

MYTH: Louis XVI was an extraordinarily bad king who didn’t care about the French people

Louis XVI certainly wasn’t a Sun King-level statesman and personality like his great-great-great-grandfather, Louis XIV. He himself fretted over his inadequacy to deal with France’s many crises. But let’s keep in mind that he wasn’t a uniquely bad monarch. He was intelligent enough, well-meaning and largely sympathetic to the French people. Prior to the Revolution, he demonstrated religious tolerance and an enthusiasm for foreign policy – an enthusiasm which helped get the United States bankrolled, by the way! And remember – during the revolution, he was dealing with a political crisis with virtually no precedent, navigating uncharted territories.

In a different time, with peace and stability, he might have made a good, beloved king. Unfortunately, what France needed more than anything was the one personality trait he was woefully deficient in: confident, visionary leadership. He either needed to be a firm tactful conservative who could hold the revolution at bay, or a bold reforming enlightened monarch who could embrace the revolution and bend it to his will. Instead, he was a weak-willed, vacillating, insecure, indecisive king who wavered from limp compliance with the revolution to begrudging acceptance of it to flailing denouncement of it.

Even so, Louis XVI is often portrayed as a bad king – a bumbling, foolish, cartoonishly clueless, occasionally even malevolent king who didn’t care about the plight of his people. This isn’t the case: he was actually desperate for his people’s love and admiration. He probably wanted to do more to help alleviate some of the poverty, but he had little room to do so, because France is Bankrupt.

Side note: The History of the World Pt. 1 portrays Louis XVI as a slimy horndog, which… umm, is really ironic, considering almost the exact opposite was true. Louis XVI and Marie Antoinette actually caused a small public uproar because of their difficulty in consummating their marriage. Louis was cold and distant, he distrusted Marie Antoinette, and for reasons that are debated by historians, Louis XVI couldn’t get it up. Some speculate that he had a physical deformity, or that he did not have a strong libido. Either way, it took a long time for the royal couple to get physically intimate. It was all the more scandalous, because Marie Antoinette was considered an attractive, enticing woman. I know sexual profligacy is an easy narrative shorthand for debauchery and hedonism during the ancien régime but still… 

MYTH: Marie Antoinette was a spendthrift uninvolved ditz

Marie Antoinette is sorely maligned by cultural retelling, deprived of agency and reduced to a historical punchline. She is variously portrayed as the embodiment of the elite's wasteful extravagance, or as the unwitting, ill-fated pretty-faced victim of the revolution. Largely neglected is her pivotal role as a political adviser to Louis XVI, a vocal reactionary, and above all, a communication channel with the Holy Roman Empire.

Remember: Marie Antoinette wasn't just any queen consort; she was a Habsburg. Her mother was Empress Maria Theresa; her two brothers and nephew served as Holy Roman Emperors through the course of the revolution. She was the tie that bound the French royal family to royal families elsewhere in Europe, and gave them a stake in the outcome of the Revolution. The Austrians raised armies to oppose the French Revolution not just out of ideological opposition; they wanted to save a member of their royal family. They were pressured into military action by Marie Antoinette herself. Her advocacy and familial ties helped inspire the political opposition that France would later face, which would push the revolutionaries in a much more militant, radical direction – but more on that later.

There are waaay too many political cartoons of the era about Marie Antoinette featuring graphic genitalia… consider this a compromise.

Truth be told, she was unjustly maligned even before the revolution. As a Habsburg and an Austrian she embodied the illiberal traditional conservatism of the Holy Roman Empire, and reform-minded citizens had a vested interest in tarring what she represented. Pre-revolution pamphlets portrayed her as an insatiable whore with lavish appetites. They mocked the royal couple's difficulty in producing an heir. And her reputation for greed and materialism largely stems from the fallout surrounding a scandal known as the Diamond Necklace Affair, in which confidence tricksters tried to frame the Queen in a fraudulent scheme to sell a diamond necklace. She was almost certainly innocent, and yet hers was the reputation that suffered due to gossip and propaganda by her political enemies.

The fact that she was not French, and came from an old political rival and sometime enemy of French power, made her the perfect scapegoat. And the scurrilous image they created of her overshadowed her active contributions to the politics of the time, chief among them pulling Louis XVI away from the revolutionary movement and toward a more hostile conservative stance – for better or worse.

Also just to be clear, she never said let them eat cake. It’s a line from Jean-Jacques Rousseau, dammit! (I think we know this, but just a reminder.)

MYTH: Louis XVI was an absolute monarch without any constraints on his power, and everyone who wanted to limit his power was a liberal reformer/radical.

Calling Louis XVI an “absolute monarch” is either inaccurate or very misleading. Absolute Monarchy is supposedly a system where the king’s authority isn’t constrained by any laws, and you could argue that Louis XVI was technically an absolute monarch insofar as he was not bound by a constitutional system like the British. But that does not mean that Louis XVI was without legal or political constraints. He did not have as much control over the state as, say, Napoleon over the French Empire.

While he was not limited by a parliamentary system, King Louis XVI was constrained by France’s byzantine decentralized governance structure, a remnant of the feudal era that was increasingly ill-equipped to run a modern bureaucratic state.

To briefly summarize the entire Middle Ages: Medieval feudal governance was highly decentralized. European nobility had much greater de facto political autonomy over their land than the king, so it made sense for kings to legitimize the nobles’ autonomy and give them legal privileges to match. Nobles had more leverage over levying troops in their territory than the king, and more control over the local economy which was more reliant on the local relationship of peasant and lord. And militarily, nobles had the ultimate trump card: medieval warfare was dominated by heavy cavalry, manned by the only class who could afford the expense. And if a good portion of that nobility decided to switch to the side of, say, another claimant to the throne, or the monarch next door, it would deal a devastating blow to the king’s fighting strength in a civil conflict.

Therefore, feudalism assumed that the king had to appease the nobility to secure his grip on power, while trying to monopolize an untouchable divine legitimacy for himself which the nobles couldn't claim. This is where those vestigial hereditary privileges of the French nobility came from: they were left over from a medieval era gripped by struggles between lords rather than nations. In this environment, it was not unreasonable to give nobles the right to pay little or no tax; their loyalty and promise to fight for the king in battle were much more valuable than their potential as a source of public revenue.

Fast forward some three hundred years to late 18th century France, and it's clear that such a legal/political framework no longer applied. The economy and military now had to be run at the national scale. A growing economy of entrepreneurship and international trade demanded coherent laws and regulations enforced across provincial borders. Military administration transitioned to professional, standing armies, run at the national level and expense. Add on top of that the demands of managing a transatlantic colonial empire, as in the case of France. Perhaps you can see why exempting the nobility from paying taxes was suddenly much more burdensome – and why placing their privileges at the center of France's political system became an albatross around the king's bloated neck.

Previous French Bourbon kings, particularly the “Sun King” Louis XIV, made a hard, gradual push to centralize the management of the French state. Even so, the Bourbons still had to work within the foundations of France’s feudal legal system.

This is a good way to introduce two of the more spectacular overlooked ironies of the French Revolution: first, the French Revolution arguably occurred because Louis XVI didn’t have enough political power; and second, the people who arguably first put the revolution in motion by opposing the King were not the liberal reformers who would make up the National Assembly, but rather the arch-conservative Parlements made up of wealthy privileged nobles, who resisted controller-general Calonne’s reform package and torpedoed Louis XVI’s attempt to reform the state outright.

You can think of the Parlements as something between a federal state supreme court and a country club. They were designed to resolve local legal disputes and preserve the rights and privileges of the nobility, and one of their key prerogatives was over tax law: any new tax law had to be registered with the Parlements, and each had veto power over its own province. So if, say, the nobility who ran the Parlements didn't want to pay land taxes, they could just decide not to.

Calonne's last-ditch effort to save his reform plan was to call an "Assembly of Notables" – a group of high-ranking nobles and clergymen, many of them members of the Parlements – and plead with them to support the reform plan of the King's administration. And the Assembly replied with a resounding no: we don't support this plan, and the King does not have the authority to impose it on us unless he calls an Estates-General.

The French Revolution was a gradual escalation of conflict between various factions, and the conflict that started it off was this one – and Louis XVI couldn't stop it, precisely because his power was not unlimited. Maybe if Louis XVI had been a true absolute monarch in the vein of Frederick the Great, he might have been able to head the revolution off at the pass. But he couldn't enact the reforms that he and his controller-general Calonne wanted. Their inability to do so allowed opposition toward their administration to escalate into a full-blown revolution.

MYTH: “Peasants” were the main supporters of the revolution

Ah yes, "Peasants". There is a misguided modern tendency to place any poor person before 1800 into a giant bucket labelled "Peasant". But by the time of the French Revolution, with the Industrial Revolution either nascent or on the horizon, and economic trends already leading to greater urbanization, the already dubious notion that all poor people were peasants was becoming less and less true.

Setting aside the technical classifications of serfdom and feudalism, which were barely hanging on as institutions in the Kingdom of France – even before the Revolution – it's still true that the overwhelming majority of French people were rural and agrarian laborers, which we can broadly classify as "peasants". But it's problematic to lump all common people together as "peasants", particularly with regard to the revolution. That archetypal French mob that propelled the radical movements and bolstered their ranks? They weren't the rural poor. They were the urban poor.

This urban and suburban poor was known as the sans-culottes – so called because they wore the humble workman's trousers in their work, as opposed to the fashionable breeches (or culottes) that well-to-do professionals wore. (It's similar to the later term blue collar in reference to work clothes – in that case, the denim uniforms of 19th and 20th century industrial workers.)

Gasp! He’s wearing pants!!

It’s the sans-culottes who drove the most radical motions of the French Revolution and were most tied to its mythology. They were still poor; but they were not peasants. And that’s a very important difference.

First, the French Revolution was effectively the modern debut of the urban and suburban working class as an independent political force. Such a class of people was still relatively new, economically speaking, which is partly why the traditional French aristocracy didn’t know how to deal with them. This distinct working class demographic would only grow with the rise of the Industrial Revolution, and their interests would remain an important influence in politics – particularly later European revolutionary politics. (I won’t go into further detail, because that would require me to engage with some controversy and open a can of worms labelled “Marxist Historical Materialism” and I don’t want to go there…)

And second, those rural poor, the actual peasants, were generally ambivalent and eventually hostile to the radical changes of the Revolution.

Why? It seems a little bizarre that the most downtrodden economic group in the French Kingdom fought for the status quo – on behalf of the nobility that presumably repressed them. Nonetheless, French peasants still held traditional political values that put them at odds with the French Revolution. Their greatest grievances against the Kingdom – the vestiges of feudalism, the arbitrary use of corvée labor, etc. – were addressed relatively early in the Revolution. With those addressed, they were often personally loyal to their lords. Moreover, they held on firmly to a traditionalist worldview and especially their Catholic beliefs. So later on in the Revolution, when the reformers tried to curtail the economic power of the church and transform the clergy effectively into civil servants managed by the state, this greatly antagonized the devout religious peasantry. They openly opposed the revolutionaries' attempts to dominate the Catholic Church in France and replace it with a new deist state religion. And once the Republic started calling for a general levy of troops, many peasants resisted and fought for the other side – monarchists, reactionaries, counter-revolutionaries – against the nascent Republican French government; notably in the War in the Vendée and the Chouannerie.

But more on that later.

MYTH: The Jacobins were a formal organization that perpetrated the most prominent atrocities of the French Revolution

The Jacobins are generally credited (or blamed) for the radical bent of the Revolution, and for escalating it from constitutional reform to republican insurrection to rule by terror. But while this isn’t entirely incorrect, it’s often missing some necessary clarifications.

First, the Jacobins weren’t really a formally managed political organization, like what we would think of as a political party that sets a political agenda and enforces it through the rank and file. The Jacobins were a political club, with a pretty broad membership of politically active members, ranging from the moderate center of the National Assembly to the far, far left.

The Jacobins were more like a forum for political activity and debate than a coherent political machine. In fact, they received their historical name from their venue, the Jacobin Club – a Dominican convent on Rue Saint-Honoré in Paris transformed into a meeting hall for their well-attended public debates. (The Dominican Order of monks was called Les Jacobins in France, because their first convent in France was dedicated to Saint James – Couvent Saint-Jacques – hence the club's name "Jacobin". The later turn of the Jacobins against the Catholic Church and clergy makes the origin of their name beautifully ironic.) The Jacobin Club's lack of structure and its openness to public spectators is largely what allowed previously unknown figures to rise through popular support and use the club as a breeding ground for increasingly radical ideas. This stood in contrast to other political clubs at the time, such as the Patriotic Society of 1789, which had clearer leadership roles manned by established political figures, who could set an agenda and suppress more radical elements.

Second, the membership of the Jacobin Club was so vast and its evolution so profound over its five-year existence that the majority of prominent figures – on many sides of the French Revolution – were members of the Jacobins at one time or another. So blaming the Jacobins for escalating the radicalism and perpetrating the prominent atrocities of the Revolution overlooks the fact that many opponents of the radicals, including some of the prominent victims of those atrocities, were also Jacobins.

Depending on the time frame, Honoré de Mirabeau, Antoine Barnave, Adrien Duport, Jacques Pierre Brissot, Jean-Paul Marat, Jacques Hébert, Camille Desmoulins, Georges Danton, Louis Antoine de Saint-Just, Maximilien Robespierre, Jean-Lambert Tallien and Paul Barras were all Jacobins at some point. That isn't so much a coalition as it is an entire cross section of the French Revolution over five years. And most of the people on this list were killed by other people on this list.

So saying the Jacobins were responsible for the Terror is a little like saying the “Socialists” were responsible for the Great Purges in the Soviet Union. Technically accurate, but keep in mind, they were killing other socialists.

A more helpful reading of the French Revolution would parse these folks out into more concrete, well-defined factions. The most important of these factions organized the overthrow of the monarchy in the August 10th Insurrection and the administration of the First French Republic.

The first and most prominent faction is the Montagnards (literally the "Mountaineers", because they typically occupied the highest benches in the assembly halls), led by Danton, Desmoulins, Hébert, Saint-Just, and Robespierre. And chances are, if you are picturing the portion of the Jacobins which were radical and guillotine-happy, you are more specifically picturing the Montagnards. Even so, you still have to contend with the Montagnards eventually murdering their own. Most notably: Jacques Hébert and his following of Hébertists, executed because they were too extreme and violent; and Georges Danton and his Dantonists, executed because they weren't extreme and violent enough. Still, replacing the broad term Jacobin with the more specific Montagnard will go a long way to better representing the French Revolution during the Reign of Terror.

The second faction which we should specify is the Girondins, which conveniently leads us to…

MYTH: The Girondins were moderate Constitutional Monarchists

Lest you think all the myths I'm covering in this list are merely about slippery narratives, interpretation, technicalities and clarifications, I submit to you one which is demonstrably, outright false.

Worse yet, this isn't a falsehood that comes from lazy, shallow popular conceptions of the French Revolution like History of the World Pt. 1 portraying the notoriously impotent Louis XVI as an uncontrollable horndog. No; this myth comes from people who should know better. I keep seeing this pop up in educational material, study guides, textbooks: the untruth that the Girondins – a high-profile faction in the First French Republic – were centrist constitutional monarchists.

This isn’t even that hard to debunk; not only were the Girondins not constitutional monarchists, they were committed anti-monarchists and (small r) republicans before it was cool!

The leaders of the Girondins were among the first to openly promote ending the French Monarchy and establishing a French Republic, most notably by publishing a periodical, Le Républicain, in early 1791 – when this was still an extreme stance, before the public goodwill toward King Louis XVI was thoroughly spoiled with the failed Flight to Varennes. The most well-known leader of the Girondins, Jacques-Pierre Brissot, was a committed anti-monarchist and anti-Catholic as early as 1785. During the Champ de Mars Massacre, Brissot was calling for the abdication of the King alongside Danton and Desmoulins, who would eventually lead the Montagnards and become the Girondins' foremost political enemies. At no point in the later French Republic did the Girondins move to reestablish the French Monarchy. They were not constitutional monarchists; they were steadfast democratic republicans.

So why the hell do these so-called educational materials keep calling them constitutional monarchists?

Strangely, I can kinda see why they get mixed up for constitutional monarchists. Let me explain.

The French Revolution is in the unfortunate position of having (in my opinion) at least six distinct cycles/chapters of conflict and factions between 1788 and 1794, while the people who try to retell it broadly assume that the public can only digest about two or three. At the same time, there's a frankly frightening number of factions to keep track of. (Parlements, loyalists, Monarchiens, Bretons, Society of 1789, Jacobins, Cordeliers, Feuillants, Girondins, Montagnards, Maraisards, Federalists, Indulgents, Enragés, Hébertists, Dantonists, Robespierrists, Émigrés, Chouans, Vendeans, Thermidorians – and that's just from 1788-1794!!)

So, imagine for the sake of argument that you tried to write a three act screenplay for the French Revolution and you realized that the first draft was three and a half hours long. That’s unacceptable, so you have to cut out some less important figures, reduce the number of subplots, and merge some of the details of existing ones, in each case appealing to what the audience knows (or thinks they know) about the French Revolution as a shorthand.

As it stands, in the mind of the general public, the “Jacobins” – a loose motley group which generally sought to escalate the revolution toward a republic – is equated with the Montagnards – a subsection of the Jacobins who resorted to widespread political violence during the Reign of Terror. By extension, it would make narrative sense to merge the main opponent of the Jacobins – the moderate constitutional monarchists represented mainly by the Society of 1789 and the Feuillants Club – with the main opponent of the Montagnards – namely, the Girondins.

In short, if Jacobins equals Montagnards, and Jacobins oppose Constitutional Monarchists, and Montagnards oppose Girondins, then the reasoning follows that Girondins equals Constitutional Monarchists – or, more accurately, saying so is convenient for a lazy streamlining of the course of the French Revolution. This is how a group which clearly wasn’t centrist or constitutional monarchist gets mistaken for centrist constitutional monarchists.

But the question remains: if the Montagnards and Girondins both espoused radical democratic republican ideals, why then were they so hostile to one another, to the point that the former would begin the Terror to eliminate the latter?

To explain that, we need to introduce a new ignored fact:

Ignored Fact #2: Political Factions and Conflicts, including “Left” and “Right”, are not static but evolving and relative

One of the most important takeaways of the French Revolution is that in a time of political upheaval, a person could take a political position which would paint them first as a vociferous radical, then as an upstart pushing the needle of the mainstream toward revolution, then as an ideologue of the status quo, then as the political right wing, then as an agent of counter-revolution, all without budging an inch on their opinions. And no one represents this better than the Girondins.

Now, the simple answer to why the Montagnards instigated the Terror to kill the Girondins is that the Girondins were to the “political right” of the Montagnards, but that carries a big asterisk. The Girondins were only to the political right in the context of a radical-dominated republican National Convention. They took a bunch of positions which were contrary to the Montagnards, and the factionalism of the late revolution ensured that they could never cooperate despite their apparent commonalities.

(By the way, the seating order of the National Convention, with the Montagnards on the Left and the Girondins on the Right, is literally where we get the political terms "left" and "right" from.)

First, the Girondins were politically radical, but socially they were more classically liberal. Their economic policies were focused on removing barriers to trade and markets, and were more favorable to the well-to-do bourgeois that made up France's upper-middle class than to the impoverished sans-culottes. They weren't fighting for the kind of social revolution that the Montagnards were promoting. (This disagreement was not helped when a devout Girondin, Charlotte Corday, decided to assassinate Jean-Paul Marat, a popular but unhinged and bloodthirsty journalist who championed the sans-culottes and pushed them toward violence.)

Second, the Girondins were the original ‘war party’. Brissot publicly argued that fighting an Austrian war would spread revolutionary ideals and ease France’s economic woes, and oddly enough he was supported by some monarchists. (Brissot may have also seen it as a way to unite the country against a common enemy that symbolized counter-revolution.) So when France got into war with Austria, and the war went very badly for France at the start, the Girondins largely took the blame – although it unwittingly helped bring about their goal of ending the Monarchy.

Third, the Girondins were more pragmatic on their treatment of the monarchy and nobility; during the deposed king’s public trial, they largely voted against the execution of Louis (formerly) XVI, on the grounds that keeping the king alive might make a useful asset for the war effort.

Finally, by the late revolution, much of the factionalism had devolved into bitter personal feuds; frankly, the Girondins and the Montagnards couldn't stand each other by the time of the National Convention, and they kept trying to get in each other's way out of spite.

Oh, and also while we’re at it…

Ignored Fact #3: France is in the middle of a massive, existential, international war

I keep noticing a tendency to separate the domestic violence and terror characteristic of the late revolution from the continental wars that we associate more with the later Napoleonic era. On the contrary, the War of the First Coalition (as historians call the first war between revolutionary France and the Austrian-led anti-French coalition) is really pivotal for understanding the radical and violent turn of the revolution, beginning in 1792.

On April 20, 1792, France declared war on Austria, their rival on the continent and the symbol of all that was reactionary and feudal in Europe. And very quickly, especially when Prussia, Great Britain, Spain, the Dutch Republic and several others decided to join the dog pile against France, it became clear that this war was a fight for the survival of the revolution and for France’s very existence (at least as a major European power). Everything that happened after this can only be understood in the context of France fighting a massive war with terrifyingly high stakes.

Georges Danton didn’t just overthrow King Louis XVI in the August 10 Insurrection because of ideological opposition. Danton did it because the king kept vetoing the country’s war policies and showing sympathies with the Austrians (y’know, his wife’s country), and the anti-French coalition openly stated that their goal was to restore the full power of the Monarchy and to “protect the Royal family”. They threatened that if the king was harmed they would burn Paris to the ground. This plausibly made King Louis XVI not just a barrier to revolution, but on par with a military enemy of France.

The Montagnards didn't just resort to guillotine justice out of manic bloodlust; they resorted to it because factionalism and dissent were disrupting the war effort, and France might be crushed without authoritative leadership and the conviction to mobilize the entire country for war.

Let's also add on top of it all the real (if exaggerated) possibility that French émigrés were disrupting the war effort from within in the hopes of reestablishing the monarchy. Suddenly the lines blur as to whether a schismatic political opponent was acting out of sincere ideological conviction about the course of the revolution, or was an agent of counter-revolution. After all, Montagnard and Girondin alike discovered that a one-time hero of the Revolution, Honoré de Mirabeau, acted as an informant to the King for payment before his untimely demise; couldn't something similar be happening in their own ranks?

Convenient excuse? Probably. But such an excuse for execution was only possible in the context of a critical war.

So it was that a war committee known as the Committee of Public Safety, originally founded to protect against foreign attack and internal rebellion, ultimately stepped into organized extermination by rounding up the Girondins and resorting to the guillotine on a massive scale. All to take control of the war effort that the Girondins had started.

MYTH: The main victims of the Reign of Terror were the French nobility and the rich

In most depictions of the French Revolution, people focus on the execution of the nobility and wealthy – e.g. the Evrémondes and Charles Darnay-ish folks à la A Tale of Two Cities. This makes narrative sense, especially if you hold the simplistic notion that the French Revolution is mainly about the poor overthrowing the rich. But again, at best this requires clarification; it only holds true for the higher-profile victims of the Reign of Terror: Louis XVI, Marie Antoinette, Louis Philippe d'Orléans, Antoine Lavoisier, Madame du Barry, Jean-Jacques Duval d'Eprémesnil, and so on. But this was a relatively small portion of the people who were actually exterminated.

For starters, many of the royalist nobles and aristocrats who would have otherwise been targeted by the Revolution joined the Émigré movement and fled the country for calmer settlements. Moreover, the nobility – from the petty to the royal – still made up a tiny portion of the population. And the targets of the Committee of Public Safety were so many that the nobility would always be outnumbered.

So who was targeted by the Committee? Truth be told, who wasn’t targeted by the Committee of Public Safety? From the perspective of Robespierre and Saint-Just, threats to the Revolution and the war effort could come from anywhere and everywhere. Girondins who opposed the Montagnards; Dantonists who thought the Montagnards had gone too far; Hébertists who thought the Montagnards hadn’t gone far enough; clergymen who resisted the Civil Constitution of the Clergy; cultists who promoted state atheism; previous political opponents who were already in jail, including many high-profile Feuillants; foreigners of questionable loyalty; Montagnards who ended up being corrupt; people who were accused by their neighbors of counter-revolution; eventually, people who just looked at Robespierre funny.

You might think this was a gradual intensification of violence, but it was pretty broad and haphazard right out of the gate. A month after the overthrow of Louis XVI in the August 10 Insurrection, Jean-Paul Marat went on a paranoid rant saying the political prisoners in the jails might be released to join the counter-revolution if they weren’t killed. So a mob of sans-culottes broke into the prisons to kill the Louis-aligned politicians and the nonjuring priests. In short order, identifying the “correct” prisoners to slaughter proved too difficult, so instead the mob just started killing everybody. In the September Massacres, as they were called, 72% of the prisoners killed were common petty criminals and indigents – and the dead included women and children.

But in terms of raw numbers, the most deaths in the Reign of Terror were (drum roll please) the peasants!

Yes, the peasants – who, as we all hopefully learned above, were not the propulsion of the revolution, but an adversarial constituency that became the manpower of reactionary counter-revolution around this time.

The peasants, who were alienated from the Salon discourse in Paris and personally attached to their traditional way of life, learned the following in quick succession: a) their king by divine right was overthrown and executed; b) the government in Paris was committed to dechristianizing the French state and suppressing the Catholic Church; c) the lands of the church and the nobles with whom the peasants had a relationship were purchased by a bunch of bourgeois opportunists; d) soldiers of the French Republic were now marching into their territory, appropriating food and resources for the war effort, and confiscating the crosses used as gravemarkers (classy); e) oh, and also the government in Paris has published a levy stating that it wants you, the peasant, to become one of those soldiers, without your consent!

Well. Perhaps we shouldn't be surprised that some of those peasants rose up and went over to the reactionary Catholic and Royal Army.

All the executions happening in Paris paled in comparison to the killings in Western France, particularly in the War in the Vendée and the Chouannerie – and most of those dead were impoverished peasants, many of them civilians.

And while the spectre of the guillotine is often invoked as the symbol of the dreaded Terror, I argue that the most heinous atrocities were committed by frenzied republican soldiers on civilians miles away from Paris. The Infernal Columns, as they were known, committed blanket executions on tens of thousands in the countryside; all told, the dead in the counter-revolutionary rebellions likely exceeded 100,000. Perhaps the most infamous incident is the Drownings at Nantes, when Jean-Baptiste Carrier ordered anyone suspected of Royalist or Catholic sympathies (basically, anyone) to be cast into the Loire to drown. Upwards of 4,000 French citizens – including women and children – were killed in this act of barbarism. It was so vile and inhumane, in fact, that Carrier was eventually charged and executed for war crimes by the Thermidorians – many of whom were former Montagnards.

So if you thought the only people who suffered the violence of the French Revolution were the snooty aristocrats who had it coming, let that notion be disabused: rich and poor, left and right were all victims of the Terror's wrath.

MYTH: Napoleon Bonaparte, a military dictator, was the antithesis of the goals of the Revolution and his rise proved that it was a failure

I’d really love to talk more about the rise of Napoleon Bonaparte and his role in the revolution’s legacy, but I have already gone into too much detail, and I want to keep it focused on what Napoleon meant for the revolutionary cause.

Many people relish the irony that the French Revolution overthrew a king and replaced him with an emperor, and point to it as evidence of the corrupting nature of power. So clearly, the French Revolution must have been a failure that backfired on its instigators, right? Right?

Right?

You’re really not helping your case with that outfit, Napoleon.

Well, not entirely. The true answer is much more complicated.

Napoleon Bonaparte was only the antithesis of the Revolution’s goals if you believe that the main goals of the revolution were democratization and political representation. If that were true, then the revolution certainly backfired; it ended up with a military dictator who had more authority and power than even Louis XVI.

But take a look at the Declaration of the Rights of Man and Citizen more closely; do you notice any right to vote? Not exactly – it does declare a right for the public to “express the public will” through “representation”, but this is kept deliberately vague.

Even if it had, it’s clear that the French Revolution meant different things to different revolutionaries at different times. And if you focus on some common threads – modernizing the political system of France, removing the privileges of the nobles, ensuring equality before the law, advancing public servants according to merit, centralizing state power, creating a more unified national identity for France, spreading Enlightenment ideals beyond French borders – Napoleon might have sorta low-key embodied the French Revolution…?

Hear me out.

For starters, the idea that a Corsican without French noble blood could ascend the military ranks on the basis of merit and ultimately become the most powerful man in France was, in its own way, a revolutionary achievement in itself. His rise to power would have been impossible without the French Revolution.

On top of that, the Wars of the Coalition (including the Napoleonic Wars) were extraordinarily effective in modernizing French political administration, making its rationalized and streamlined bureaucracy the envy of many nations. This included military reforms, the promotion of the metric system, a Central Bank, and the foundations of a nationwide basic education system. And while the circumstances weren’t always favorable to the countries in question, Napoleon did help spread the ideals of the Revolution around Europe. He did so mainly by conquering those countries and installing puppet states, but still.

Most importantly, Napoleon was personally responsible for one of the most important and influential achievements of the Revolution, though it is often overlooked: the development of a fair, equal and universal law code for citizens regardless of class.

The Napoleonic Code, as it was known, overhauled the arcane and contradictory law codes of the ancien régime and replaced them with a comprehensive rewrite, forming a coherent, well-ordered system of laws that was clearer and more accessible to the public. This new code not only embodied the legalistic goals of the Revolution; it would become the most influential and widely used modern law code in world history.

This is in part because Napoleon imposed the law code on countries he invaded, so most of Europe got stuck with it – including ones that ran large colonial empires. (Even so, it’s something of an endorsement that those countries decided to keep the Napoleonic Code after Napoleon was Saint-Helena’d.)

But the law code was also influential because reformers in other countries saddled with similarly muddled, arcane legal systems opted to just borrow the Napoleonic Code rather than reinvent it – regardless of what they thought of Napoleon. Hey, if it ain’t broke, copy it and break for lunch.

One final point: the ascendance of Napoleon Bonaparte proved in a big way that a revolutionary country could also be a powerful country, and its political upheaval might not be a death sentence but the beginning of a new and glorious chapter. Whether or not you approve of what Napoleon did, you can’t deny that he helped France accomplish some pretty spectacular military feats, and he brought France to the brink of conquering Europe shortly after one of the darkest and most perilous chapters of its history.

It is easy to forget just how close France came to being wiped out in the War of the First Coalition. It was surrounded on all sides by some of the greatest military powers of Europe, all while fighting internal insurrections of its own. If the Royalists inside France couldn’t snuff out the spread of the French Revolution, then the combined forces of the Holy Roman Empire, Prussia, Spain and the United Kingdom sure could have nipped the revolution in the bud by bringing France to its knees.

Through Napoleon’s dictatorship, the legacy of the French Revolution not only survived the wars, but became the basis of a military power that nearly conquered Europe and brought it under a single empire. As a result, the revolution became unavoidable for the rest of Europe, and the world. Everyone had to contend with the legacy of the French Revolution and what it meant, and everyone still has to contend with it today. And that is at least in part thanks to that megalomaniac in the bicorne hat.

MYTH: Les Misérables was about the French Revolution

Fun fact: just because a musical takes place in France and focuses on a revolt does not mean it is about the French Revolution! And yes, I’m focused on fans of the musical because the book is a lot clearer about when it takes place. (Victor Hugo did not skimp on details in his nearly 1,500 page book.)

The French Revolution (the big one) may have ended around 1799, depending on when you want to pick its end date, but that did not end revolutionary fervor. In the 19th century, there were three full-blown revolutions in France that obtained some degree of success, and several more little skirmishes that did not. And Les Misérables focuses on one of those skirmishes.

If you watch closely, you may notice some evidence that Les Mis is not about the French Revolution, based on the subtle detail that the Revolutionaries don’t, y’know, win – and it seems to take place over a few days, rather than several long busy years.

I mean… they lost. Did you miss that they lost? Wasn’t that the whole point of the show??

No; in fact Les Misérables focuses on the much smaller and wimpier June Rebellion of 1832, when republican secret societies, resentful that the previous revolution they had just fought (the successful-ish July Revolution of 1830) had merely replaced a reactionary right-wing monarch with a different, more liberal, bourgeois-friendly monarch, picked a date to throw up barricades in the hopes of baiting the public into a full-blown revolt – and it failed spectacularly. (At least, until it was adapted for commercial art. That was quite successful.)

The worst part is that this misconception only confirms that public knowledge of the French Revolution is so shallow that any period piece employing the rhetoric of “freedom” and “the people” and complaining about the “starving poor” in a French accent can be mistaken for the French Revolution.

Guys, the French Revolution was a big, long, drawn out, messy conflict lasting at least a decade. It’s also one of the most transformative events of European history. If you think the brief tussle you saw in Les Misérables is all there was, we have a problem.

While this list may have cleared up some confusions, I don’t think I’ve done enough. I think I need to change approach, and use a different medium that can make the French Revolution more engaging and relatable. Something that people can grab onto for information, while making it interesting and not just homework. Something that helps the characters and their conflicts stick in the head, like some kind of… earworm factoid. I wonder what I could do…

Hey! How about a musical?

(Worked for Hamilton, didn’t it?)


Always Revolting,

Connor Raikes, a.k.a. Raikespeare


P.S. When I use the term the bourgeois, I mean it in the strict historical sense: i.e. a legally defined class from the late Middle Ages to the revolutionary period who were based in the cities and had political rights based on their property ownership (related to the German burgher), generally associated with upwardly mobile upper-middle class gentry. This is distinct from, but perhaps related to, the Marxist terminology used to describe the capital-owning class whose economic interests are contrary to the proletariat. I will not discuss the connection of the historical French bourgeoisie to the bourgeoisie of Marxist economic theory, or the tangibility of the latter, because that would require me to open a can of worms labelled “Marxist dialectical materialism”, and I don’t want to go there…

Pedantic Service Announcement: Saying “Cheese” in Group Photos

This is a Pedantic Service Announcement. These will be made periodically in service of the sticklers who choose to correct common misconceptions at the cost of human interaction. I am here for you. 

I want to talk to you about something very unimportant.

So, it occurs to me that the purpose behind instructing subjects to say “cheese” when taking a photograph often goes over people’s heads. And occasionally, some people try to change it in an attempt to be clever, and end up ruining the whole exercise.

This post was inspired by an anonymous hiker who offered to take a photo of our family, and instructed us to say “Cheeseburger” – which (as I’d like to illustrate) is a terrible word choice. (By the way, thank you for the photo, hope you had a nice hike.)

So I want to make it clear: the choice of the word “cheese” is not random; it’s intentional, and it serves a very specific linguistic purpose. Which is not to say it’s irreplaceable; you can change it and parody it and be clever with it. But if you want to do so, you had better understand what it’s trying to do, and why it works so well in a way that “cheeseburger” definitely does not.

Say “Cheese”

First of all: instructing the subject to say a word like “cheese” is most often employed in group photos, because it spurs a large group to attention and keeps their behavior aligned with the photographer. But why say “cheese”?

The basic answer (as many people will tell you) is that saying “cheese” more or less inspires you to smile. And assuming you want your group to smile on cue, saying “cheese” helps them do it.

But that’s only a basic overview answer; let’s unpack that a little bit.

“Cheese” is a common, simple, monosyllabic word, and it’s one that young children in particular learn relatively early on. That maximizes the likelihood that people will recognize it quickly, and that they won’t fall out of sync with each other when saying a longer word. But more important is the specific phonetic sound it makes. So let’s do a tedious analysis.

In common British and American English, saying “cheese” (/tʃiːz/) requires you to make three sounds: a “ch” consonant sound (i.e. a voiceless postalveolar affricate), a “long e” vowel sound (i.e. a high front unrounded vowel) and a “z” consonant sound (i.e. a voiced alveolar fricative).

The “ch” sound (represented as “tʃ” in the International Phonetic Alphabet) – the voiceless postalveolar affricate, alternatively the voiceless palato-alveolar sibilant affricate or the voiceless domed postalveolar sibilant affricate – is the most complex sound, but it’s also the least important. So let’s say little more about it, except that it’s there and it doesn’t interfere with the rest of the sounds you have to make.

Next and most important is the “ee” sound, commonly known in English as the “long e” sound, the high front or close front unrounded vowel, represented with “i” in the International Phonetic Alphabet.

(Side Note: In most Latin-influenced languages, the sound is represented with an “i”, such as the ‘i’ in Spanish “amigo”. English, on the other hand, underwent a mysterious linguistic transformation from 1350 to 1600 known as the “Great Vowel Shift”, in which it diverged from continental conventions of pronunciation. One consequence of this was that English adopted two primary vowel sounds that are represented by the letter ‘e’: the “long e” (the ‘ee’ sound in “beet”) and the “short e” (the ‘e’ sound in “bet”) – so called because the former sounds are usually sustained longer than the latter. Hence the terminology and notation.)

Why is this sound important? Well, it’s actually hidden in its formal name, if you translate it a little bit.

It is an “unrounded vowel”, which means that you don’t round your lips in order to make the sound – as you would when saying the letter “o”. Instead, your lips are generally kept flat.

It is a “front vowel”, which means your tongue is raised close to the front of your mouth when making the sound (but not obstructing the passage). This is contrasted with a “back vowel”, like the “oo” sound in “boo”, in which you hold your tongue away from the front of the mouth.

And finally, it is a “high vowel” or “close vowel”, which means that your tongue is held high, close to the top of the mouth, mostly closing the passage (again, still not obstructing the passage). This is contrasted with an “ah” sound, an “open” or “low” vowel, which keeps the tongue low, close to the bottom of the mouth, keeping the passage as open as possible.

In sum: when making the “ee” sound, your lips are unrounded, and your tongue is covering a lot of space, hugging both the front and the top of the mouth. And when your facial muscles maneuver to make this shape, a notable side effect occurs: the sides of your lips flex and naturally widen to accommodate the tongue.

And this is the key: widened, flexed lips are necessary (if not sufficient) for a smile. And unless people are actively trying to signal a different emotion, they will make a smile-like facial expression without thinking about it when saying an “ee” sound, as a consequence of trying to make the sound. It is the one phonetic feature of “cheese” which is critical for any instruction word for a photographer: a finish on a high front unrounded vowel – which is to say, an “ee” sound.

Finally, the “z” sound is a voiced alveolar fricative. A fricative is a consonant sound produced by forcing air through a narrow pathway formed by two oral bodily parts. For example, a labiodental fricative, such as “v” or “f”, is created by forcing air between your upper teeth (dentals), and your lower lip (labials). A “z” sound in particular is a subcategory of fricatives called “sibilants”, which are higher in pitch and bring the tongue close to the teeth.

The “z” sound is one of the alveolar fricatives/sibilants, where the fricative noise is caused by forcing air between the tongue and the alveolar ridge – the ridge of the mouth created by the dental alveoli or “jaw sockets” that hold the incisor and canine teeth. The alveolar fricatives “z” and “s” are also known as “hissing sibilants” because they are associated with “hiss”-like sounds, as opposed to postalveolar fricatives/sibilants or “hushing sibilants” such as “sh” or “zh”. (Think of the “si” in “fusion”, or the “Zs” in “Zsa Zsa Gabor”, for examples of the latter.)

Now, the “voiced” vs. “voiceless” distinction is determined by whether your vocal cords are vibrating when making the sound; it’s the difference between a “d” (a voiced consonant) and a “t” (a voiceless consonant), or a “b” (a voiced consonant) and a “p” (a voiceless consonant). The vibration of the vocal cords has the effect of making the consonant warmer and softer than its voiceless counterpart, which is typically sharper and more piercing. The “z” sound is voiced, and its voiceless counterpart is “s”. (Note that when sustaining the “z” sound, people often drop the ‘voicing’ after the initial sound and sustain a broader unvoiced “hiss”.)

While not as important as the “ee” sound, the “z” sound still serves a useful purpose. As an alveolar sibilant, it requires the tip of your tongue to rise toward the alveolar ridge at the top of the mouth to make that fricative sound. This leads people to close their teeth while keeping their lips open to let the fricative sound out. But so long as the lips are open, they can form almost any desired shape, because they aren’t required to create the sound. In short, the “z” sound closes the teeth and keeps the lips open. And assuming people keep the lip shape formed from making the “ee” sound, what’s the typical result? A nice, toothy smile. (Not everybody gets to the “z” sound, but simply preparing for it produces a similar effect.)

Finally, in addition to its linguistic function, let’s also credit the fact that “cheese” is an agreeable, lightly amusing word. Maybe this is subjective, but I would bet most people find the thought of cheese pleasant and easy to smile about anyway, which makes it all the merrier to say as far as group photos are concerned.

Bad “Burger”

Now, for posterity let’s compare this to the sounds made by saying “cheeseburger” – specifically, the “ur/er” sounds at the end. In American English these are rhotic consonants, a wildly diverse class of consonant sounds. (This is not the case in the standard British dialect of English – known as “Received Pronunciation” – and related dialects, which prefer to drop “r” sounds in favor of a preceding vowel sound unless they are followed by a vowel sound; these are known as “non-rhotic” dialects.)

Rhotic consonants are “liquid consonants”, which are more flowing and behave more like vowels than other consonant sounds. They’re also “slippery”, and can be pronounced in a number of ways – but in American English, the “r” is generally pronounced as a labialized postalveolar approximant or labialized retroflex approximant.

An approximant is kinda a half-fricative; it is a class of consonant sounds where the oral bodily parts approach one another without creating the turbulent airflow you’d get with a fricative, and postalveolar and retroflex both mean the sound is produced by placing the tongue toward the back of the alveolar ridge.

This is already counter to the simple smile motion that you presumably want to make, but the bigger problem is that the “r” sound is almost always labialized in American English. Labialization is a secondary process of pronunciation that involves rounding the lips to adjust the basic consonant sound. (It is the complement to “roundedness” in vowel sounds.)

In short, when producing the sounds in “burger”, people won’t widen their lips; they will compress and scrunch their lips, producing an unappealing facial expression somewhere between a pucker and a befuddled look.

If the point of the exercise is to make people smile, the photo subjects will have to actively fight against this word in order to do so. They will have to say “cheeseburger”, stop, and then smile – which is made more difficult by the fact that “cheeseburger” is three syllables long. Instructing them to say the word is practically worse than saying nothing; it is certainly worse than a countdown. It is a barrier placed in between the subjects and the goal of smiling for the camera!

Guidelines for Parody

Now does this mean that saying “cheese” can’t be changed or parodied? Of course not! Just keep in mind what linguistic function the word “cheese” serves and how to replicate the effect while oh so cleverly using a different word for instruction for the sardonic anarchic postmodern lulz.

The easiest method is to find a word that is a rhyme or slant rhyme with “cheese”: depending on the crowd, “sneeze”, “please”, “Chinese”, “striptease”, “disease” and “booties” could all work well.

But it’s not strictly necessary to find a word that rhymes with “cheese”. Most words are suitable so long as 1) the last vowel is a high front unrounded vowel sound (i.e. an “ee” sound) and 2) the word doesn’t end on a consonant that changes the shape of the lips (e.g. a labialized or labial consonant).

(Also, I’d recommend a shorter word that’s only one or two syllables long. Anything more than three syllables increases the risk of photo subjects falling out of sync as they say the word. Better to have a single syllable that they can stretch out.)

So in a pinch, “Mary”, “theory”, “peace”, “beat”, “weed” and, well, “teeth” could work. But be careful: words like “beer”, “seal”, “leap”, “dweeb”, “queef” and “peeve” will not work so well, given how they impact the lips and disrupt a smile.
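If you’d like to over-engineer the choice (and you’re reading a Pedantic Service Announcement, so you just might), the rules of thumb above are simple enough to sketch in a few lines of Python. To be clear, this is a toy illustration only: the mini-lexicon, the lip-reshaper list and the smile_friendly function are all made up for the example, and a real version would look words up in a proper pronunciation dictionary rather than a hand-built table.

    # Toy sketch of the word-choice rules above. The mini-lexicon is hand-built
    # and hypothetical; a real version would consult a pronunciation dictionary.

    # word -> (syllable count, last vowel sound, final consonant or None)
    MINI_LEXICON = {
        "cheese":       (1, "ee", "z"),
        "please":       (1, "ee", "z"),
        "mary":         (2, "ee", None),
        "beer":         (1, "ee", "r"),   # rhotic finish scrunches the lips
        "leap":         (1, "ee", "p"),   # bilabial stop closes the lips
        "cheeseburger": (3, "er", "r"),   # wrong final vowel, too long, labialized "r"
    }

    # Final consonants that pull the lips out of a smile shape
    LIP_RESHAPERS = {"r", "l", "p", "b", "m", "w", "f", "v"}

    def smile_friendly(word):
        """Apply the three rules of thumb: short, ends on an 'ee' vowel,
        and doesn't finish on a lip-reshaping consonant."""
        syllables, last_vowel, final_consonant = MINI_LEXICON[word]
        return (syllables <= 2
                and last_vowel == "ee"
                and final_consonant not in LIP_RESHAPERS)

    for word in MINI_LEXICON:
        print(word, "->", "say it!" if smile_friendly(word) else "avoid")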

In lieu of an overlong explanation, I advise you to try it out for yourself; think about the word you’re going to use, mouth it, and be cognizant of how it shapes your face. With a little practice, I suspect you’ll figure it out quickly and determine how to intuit a good word choice from a bad word choice.

Or, alternatively, you could just say “cheese” like a regular person and try not to outsmart a convention that’s more thoughtful than you may have realized. Or, even better, just do a countdown.

Voiced Labiovelar Approximant Semivowel, Voiceless Alveolar Stop, Voiceless Labiodental Fricative.

Connor Raikes, a.k.a. Raikespeare

P.S. Linguists seem to sneak a bunch of onomatopoeia into their terminology, don’t they? Like, the first letter of “sibilant” is a sibilant, the first letter of “fricative” is a fricative, the first letter of “liquid” is a liquid consonant, the first letter of “voiced” is a voiced consonant. Also, the first letter of “rhotic” is also a rhotic consonant, but then again, rhotic literally means “like the letter Rho”, so that’s sorta self-explanatory…

Ten Rules of Trivia


What is trivia?

Trivia – noun. (Technically plural, but in daily use, more often used as a singular in itself.)

Definition: Knowledge, details, or matters which are deemed nonessential, inconsequential, or unimportant.

Though one might assume that it is cognate with the word “trifle” with which it shares a similar meaning, it is in origin much older, derived from the Latin trivium – “tri-” and “-vium” meaning “three roads”, or “the place where three roads meet”.

The term in common use originated from the classical liberal arts education, the standard education for free Greek and Roman citizens aspiring to be cultured and participate in public life (liberalis: “worthy of free men”), as opposed to the “practical education” of vocation, craft and trade. A classical liberal arts education traditionally had seven disciplines. The trivia (singular: “trivium”) were the three lower disciplines which were considered foundational or primary: grammar, logic and rhetoric. Learning the trivia was necessary before proceeding to the quadrivia, the four higher disciplines of the liberal arts: arithmetic, geometry, music and astronomy. Because the trivia were more basic and the quadrivia more advanced and prestigious, trivia gained the connotation of “lesser knowledge” and later evolved into the meaning of “useless knowledge” – even though the trivia were, by their very nature, essential: the fundamentals of the liberal arts.

This was not the only usage of the term trivia: it was also used in reference to a crossroads, as a joining point of three roads. And in Western folklore, the crossroads was often held as a symbolic threshold between the realm of the natural and the supernatural; the intersection of material and magical; a meeting place of world and underworld. Its implications were of mischief, mysticism, and sorcery; even into modern times, the crossroads was recognized as a place to summon demonic powers and make a deal with the devil – as seen in the 1926 film adaptation of Faust, and in numerous 20th century American blues songs about men selling their souls.

Hekate, inspiration for Diana Trivia

In Greek mythology, Hekate was a triple-faced or triple-bodied goddess of the three-way crossroads – and, accordingly, of magic, witchcraft, and necromancy. The later Roman equivalency lent these traits of Hekate to the Roman goddess Diana – even though Diana was equated first and foremost with Artemis, Greek goddess of nature and the hunt. Diana also took on the traditional associations of Luna, Roman goddess of the moon – and her Greek equivalent, Selene. In effect, Diana was herself a triple deity, merged from three influences and taking different forms; and when her worshipers portrayed her in the form of the bewitching goddess of the crossroads, she was given the epithet Diana Trivia, “Diana of the three paths”. And it is her associations with magic and nature that still give Diana Trivia symbolic import in neopagan religions and subcultures to this day.

“Triviality” is the trait in mathematics of solutions which are null, or otherwise so simple in structure that they are considered unrevealing. “Trivial”, rendered from the sense of “lesser” to mean “commonplace”, referred to the common names of chemical compounds which don’t obey the systematic rules of scientific nomenclature; and in a similar way, “Trivia” meaning “common”, became the name of a genus of sea snails and marine gastropods, which resemble cowries, even though they aren’t closely related. “Trivia”, used in both the sense of crossroads and the commonplace, was the title of a 1716 satirical poem by English dramatist John Gay, about walking the streets of 18th century London.

In the early 20th century, trivia meaning “bits of information of little consequence” gained popular usage through American-born British aphorist Logan Pearsall Smith, in his works Trivia and More Trivia, which were collections of essays tied to silly observations of public life.

And so it was, in the 1960’s, that American college students first applied the term trivia to quizzes for the sake of entertainment. The first mention of “trivia” as a parlor game of trading knowledge questions is attested in a 1965 article in the Columbia Daily Spectator by Ed Goodgold, who then collaborated with his colleague Dan Carlinsky to organize a “trivia contest”. The following year, they published a quiz book entitled “Trivia”, which achieved a ranking on the New York Times best seller list. This introduced the usage of trivia as a “quiz of general knowledge” to the wider public, a connotation which became set in stone with the 1981 release of the popular board game Trivial Pursuit.

Trivia is a many-faced word, like the goddess that bore its epithet: it variously means fundamental and unnecessary; commonplace and mythical; natural and supernatural; it is too simple and too complicated; it is unimportant, and yet, to me, it is so, so meaningful. All of this, and more, is trivia; and you may take that as you will.


I love trivia; I’ve loved it for as long as I remember and I will love it till the day I die. And even then, I haven’t ruled out putting a neat factoid on my tombstone.

For those of you who know of the songs I wrote where I named every country and capital of the world, this should come as no surprise. What can I say? It’s the perfect hobby for people who are both intellectually curious and self-competitive.

And I’m not alone. I attend a pub quiz every Tuesday (7pm at The Lookout in Seattle), where I am surrounded by people who have the same unbridled joy for trivia as I do. I largely go there because the quizmaster clearly cares about trivia himself; he takes real care in crafting and selecting questions. For this quizmaster, it’s personal, and it’s focused on what makes trivia so engaging. And that’s why it’s high quality, in my opinion.

But not all pub quizzes are so lucky.

I have been to pubs that seemingly view trivia as little more than a go-to solution to increase bar attendance on a quiet weekday evening. Some borrow a humdrum questionnaire from the first website they visit and instruct the bartender to read it out loud as apathetically as possible. Others shrug and hire the slick mass-produced trivia experience offered most prominently by Geeks who Drink. In either case, they see trivia as a means to an end, and therefore they miss the point.

I have trouble explaining this to people who don’t recognize it already, but good trivia is more than just getting drunk and taking a school test. Good trivia doesn’t have to pardon itself and make itself more palatable with extraneous elements that add entertainment. Good trivia is enjoyable in and of itself; it can be inspiring, tense and cathartic for the player.

If you want proof that trivia itself is naturally engaging, consider that most of the earliest broadcast game shows (on radio) were trivia competitions. Jeopardy! has been on the air since 1964 (with short intermittent periods off the air) and its admirably austere format has remained virtually unadulterated that whole time. Its quiz show competitors added better production values, higher stakes, larger personalities and the latest gimmicks, and Jeopardy! outlasted them all with little more than an understated host, three contestants, and a lot of trivia. (It doesn’t even have a studio audience!) Its pure simplicity was the key to its longevity, not an impediment!

Usually, to get that enthralling experience, the player has to have the correct mindset and personal investment (i.e. I can’t convince you to like trivia if you’ve decided you don’t care), but it’s the quizmaster, above all, who has the biggest impact in shaping the experience and making it enjoyable. And if the quizmaster is deficient in mindset and personal investment, then even the second coming of Ken Jennings won’t have a good time. (Hell, such a player might have the worst time of all!)

And to my mind, being a good quizmaster isn’t hard; it just requires effort and care. So I have taken my personal experience as a player of what makes trivia good or not, and distilled it into ten simple rules for making good trivia. Let these commandments be your law and guide if you find yourself in the unusual position of leading a pub quiz and having to make up all of your own questions.

I’ll start by listing them all, and then follow up with some commentary. Here they are, straight from my personal Mount Sinai.

Raikespeare’s Ten Commandments of Good Trivia

  1. Simple Answers: The correct answer should be simple and straightforward (ideally it should seem obvious in retrospect)
  2. Engaging Questions: A question should be engaging (interesting, dynamic), particularly to someone who doesn’t immediately know the answer
  3. Broad Knowledge: The selection of questions should reward the prudent use of a broad base of knowledge
  4. Non-Ambiguity: The correct answer should be unambiguous (to the best of the quizmaster’s ability)
  5. Challenge: Questions should add an element of challenge, and shouldn’t be too easy for a quiztaker relying on basic knowledge
  6. Non-Pedantry: Quizzes shouldn’t be pedantic; they should not request information that is overly detailed, or require highly specialized, inaccessible knowledge
  7. Multiplicity: Questions should present enough information for a quiztaker to arrive at the right answer from multiple avenues/lines of thought
  8. Social Play: If playing with teams, questions should encourage the discussion and sharing of information among team members to build consensus and identify the right answer
  9. Themes: Thematic categories are useful, so long as they approach questions from many angles and their application doesn’t conflict with the other rules (e.g. Broad Knowledge; Non-Pedantry; Multiplicity; Social Play)
  10. Fun: Quiztakers are here to have fun. Trivia should be fun in itself.
Fun fact: The scribbles on Moses’ tablets in the 1956 film “Ten Commandments” actually spell out a shorthand for the commandments in a Paleo-Hebrew script. Nice touch.

Commentary

Rule #1: Simple Answers

I think everyone understands this intuitively, even if they don’t notice it consciously. Let’s analyze Jeopardy!’s most well-known gimmick – one of the few features of the show which I can actually call a gimmick: answering in the form of a question.

I’ll dissect this in more detail later, but for now consider this: since Jeopardy contestants are obliged to answer in the form of a question, what kind of “questions” do the contestants use? What interrogative words (i.e. “question words”) do they use the most?

While I can’t back this up with hard data, I can confidently say that contestants overwhelmingly use “what” as their go-to, with a notable usage of “who” when referring to a person. They rarely (if ever) say “when” or “where”. They don’t ever say “how” or “why”.

Here’s my (admittedly ironic) question: why? Why don’t Jeopardy contestants say, “why”?

If you ask me, it’s because of this unstated rule at the heart of trivia: the correct answer should be simple and straightforward.

“What” and “who” are both interrogative pronouns in this context. They both imply discrete, narrow subject matter; the answer they provide in each case is restricted to a single noun, proper noun or phrase: “What is redwood cedar,” “Who is Napoleon,” “What is The Sun Also Rises,” etc.

By contrast, “how” and “why” are interrogative pro-adverbs; the subject matter that they imply is complex and relational – “how” implies “what method/manner” and “why” implies “what reason/cause”.

“How is x” and “Why is x” are both incomplete, assuming x is a noun/proper noun. (It requires at minimum an adjective, participle or modifier statement to complete it, such as “How is redwood cedar red,” “How is Napoleon doing,” “How is The Sun Also Rises a classic,” etc.) Because of the natural preference for simple, discrete, narrow, concrete solutions, Jeopardy! almost always selects prompts where the correct response begins with “what is”, “what are”, “who is” or “who are”.

Now why do people prefer trivia where the question is framed for simple, discrete answers? Well, some reasons are closely related to other rules (see rule #4 in particular), but broadly speaking, complex answers are weaker, more open-ended, more presumptive, and less falsifiable. Keeping answers simple and straightforward prevents the wishy-washy subjectivity that comes with relational questions.

A good trivia question is like a fully constructed lock that’s opened with a specific key; all that’s left for the quiztaker to do is to identify the correct key. By contrast, a trivia question that disobeys the first rule is like showing the quiztaker an unfinished lock, and asking the quiztaker to imagine and sketch out the rest of the lock and fill it in with a key based on their own assumptions.

Rule #2: Engaging Questions

Remember when I said earlier that trivia is more than just taking a school exam on several servings of alcohol? This rule marks that difference.

Exam questions are not designed to be engaging; they’re designed to determine whether you have the testable knowledge the educators are looking for. That’s why they typically draw a stark line between knowledge and ignorance; often the question gives you only the bare information you need: “What commander won the Battle of Hastings?” “What is the capital of Hungary?” “Who wrote War and Peace?” You either know it, or you don’t.

That is not how good trivia frames its questions.

A question such as “What is the capital of Hungary” is only as engaging as the effort the quiztaker’s inclined to give to find the correct answer. For people who know that the answer is Budapest, the moment they recall it, they cease to be engaged. For people who don’t know the capital of Hungary, this question is never engaging; it only reminds them of their ignorance. In short, this is a basic question, and one of the least interesting ways to frame it.

Now consider a closely related question, which I would argue is more engaging: “Historically two separate towns divided across the Danube River from each other, they were formally combined in 1873 to form this European capital, now known by the merger of the two towns’ names. What is the city?”

Somebody who knows about Budapest is still engaged with the process of sifting through the information and focusing on the parts that will lead them to Budapest. Throughout that process, there’s a greater chance that a knowledgeable quiztaker would still learn something new (did you know that Buda and Pest were once two separate cities?!) or at least be reminded of a novel fact.

By the same token, someone who doesn’t know the answer is still engaged in the question. It sparks more curiosity and interest, and provides them with more information for a slightly more informed guess. Even if they don’t get it correct, they still end up with a neat factoid as a consolation.

Finally, there’s often a third category of quiztakers who operate in the grey area between knowing and not knowing. Maybe they once knew the capital of Hungary, but their memory is fuzzy. The more dynamic question helps them look at the problem from different angles, so they’re more likely to find a way to the correct answer with some quick thinking and creativity. They can discard any cities that aren’t located near the Danube, for instance, or any city whose name sounds nothing like two town names merged together. Even if they miss the correct answer, the dynamic question gives them a chance to do a post-mortem on their thought process and figure out how they could have identified it.

And remember: it’s the same question, and it’s still asking for the same straightforward answer. The difference is that the quizmaster embraces the art of asking and framing the question, to make it more interesting for everyone – even those who don’t know the correct answer. That is the mark of good trivia: it makes the journey just as rewarding as the destination.

A word of note: making engaging questions doesn’t always mean adding more information, and at some point bloating the questions with info will make them less engaging. Keep in mind also that there are many ways to make a question engaging, and framing a question around a more expansive and interesting piece of information is just one of them.

The first and second rules lay out the basic framework of trivia: simple answers, dynamic questions. The questions should be complex enough to be engaging and interesting, but the answers to those questions have to be simple and straightforward. Most of the other rules inform these two in some way.

There is some tension between the first two rules, but it’s what I’d call a constructive tension. Navigating a complex question is all the more intriguing with the knowledge that the answer which resolves it is simple.

Rule #3: Broad Knowledge

Quizmasters should use a variety of questions from a broad range of categories, and reward quiztakers who have a broad base of knowledge and use it judiciously.

There are several reasons for this.

First, broadening the base of knowledge is a simple method to create gameplay balance, reward quiztakers with skill and reduce the risk of a “fluke” performance. Focusing the questions too narrowly on a specific field of knowledge will cause more teams to win on luck and happenstance rather than skill. For example, a team might dominate a round where all the questions are about advanced human anatomy simply because they have a medical doctor as a team member. “Diversifying” the quizzes will prevent such deviations and keep the game more competitive and fair.

Second, people who enjoy quizzes are typically curious by nature, and a broad range of categories is more likely to spark that curiosity. Conversely, a limited range of questions from a narrow base of knowledge will dull quiztakers’ curiosity – even if it’s a category they’re interested in.

Third, applying a broad base of knowledge requires quiztakers to use different kinds of reasoning, and in my opinion, that’s just a good thing to encourage. This may surprise you, but I admire the “renaissance man/woman” ideal. A shock, I know, considering I use a punny nickname based off of Shakespeare. One of the traits I value most is the ability to think with versatility, and I like when quizzes treat it as a virtue. Hopefully this is not an unpopular opinion.

There are more reasons for using a diverse range of questions, but many of them overlap with the other rules below.

Rule #4: Non-Ambiguity

Quizzes should try to obey the input/output principle: all else being equal, for every input there should be one and only one valid output. If you want to infuriate a dedicated trivia buff, make the ‘correct’ answer ambiguous or dubious in its accuracy. Not only does it frustrate quiztakers if they identify multiple answers that seem correct, but it deflates the experience by making success or failure seem arbitrary, subject to the whims of the quizmaster. Above all, an ambiguous answer indicates neglect on the part of the quizmaster, and undermines the quizmaster’s authority in the eyes of the quiztakers. It often turns the quiz into an impromptu legal dispute rather than a test of skill, with quiztakers refuting the correct answer or arguing for the validity of their own.

For the record, I’m not talking about two different ways to say the same answer (e.g. “Clark Kent/Superman”) or two different but closely related, overlapping answers (e.g. a question where “Freddie Mercury” and “Queen” are both correct answers). I’m talking about a question where there are two or more wholly distinct answers.

Let me use a couple of bad examples from personal experience to illustrate:

First, at one trivia night I went to before The Lookout, the quizmaster asked the trivia question: “What prolific 20th century performer was nicknamed ‘The Duke’?”

Our team quickly wrote down the intended answer: legendary Hollywood actor John Wayne. But when answers were being read out, several teams (including one next to us) argued for a completely different but equally valid answer: legendary jazz composer and band leader Duke Ellington.

Now, despite the fact that we got the intended answer, and it was not in our self-interest for the other teams to get a point, I adamantly argued on their behalf. And it had as much to do with my sympathy for their answer as it did with my indignation at the quizmaster. Because of the shoddy construction of the question, yes, Duke Ellington was a correct answer, and qualified just as well as John Wayne. There are some metagame reasons why you might want to avoid that answer (e.g. it’s a little on the nose when the person is most commonly known by the name “Duke Ellington”), but that’s beside the point. The quizmaster had a responsibility to anticipate this confusion and avoid it with more clarification – and seriously, it would not have been that hard. “Three-time Academy Award nominee”, “born Marion Robert Morrison (seriously)” – hell, even “prolific Hollywood actor” would have sufficed. But no. Instead they picked “prolific 20th century performer.”

Second, more recently, there’s a café near my workplace that gives a free cup of coffee to the first patron who answers a trivia question correctly. I went to grab myself a coffee and I saw the trivia written on the whiteboard listed as follows: “What species of cat is the largest in the world?”

It didn’t really matter, since somebody had already guessed the “correct” answer, but I figured I would take a shot.

First, I asked a clarification: “When you say largest, does that mean weight?”

Barista’s reply: “By largest we mean size.”

…Ok. I’m not sure she realized that this did not clarify. But oh well.

I say, “I’d like to take a guess: tiger/Bengal tiger.”

The Barista shook her head. “Sorry, that’s not correct.”

I shrug and say, “What is it?” (Again, somebody already got the coffee so it didn’t matter.)

And the barista said: “It is, in fact, the Liger.”

…NOW. I’m sure most café patrons would be interested to learn that there is such a thing as a Liger (a male lion/female tiger crossbreed) and that it wasn’t just made up by Napoleon Dynamite. They might also be interested to learn that Ligers exhibit unusual emergent traits separate from their parents – most notably, they grow larger than the adult males of either lions or tigers. This is a very intriguing fact that most people would appreciate.

But this barista unfortunately asked this of a trivia buff, and I had just one tiny problem with that answer:

“That is NOT a species!”

See, the Liger does grow larger than its host parents, but the question said “species” and Liger definitively does not qualify as a species.

See, a “species” isn’t a generic term for a group of namable animals. It has a specific scientific definition: a species is the largest taxonomic grouping wherein any pair of fertile individuals of the appropriate reproductive sexes can mate and reliably produce fertile offspring.

Ligers, on the other hand, are not a species, and are disqualified as such on two counts: first, the overwhelming majority of them are born infertile, and cannot reproduce, either with themselves or with their host parents. Second, if Ligers were a species, then their birth parents would also have to be a part of that same species, which lions and tigers definitively are not. Now, lions and tigers are part of the same genus, Panthera, which in this case allows them to produce hybrid offspring – but again, the hybrid offspring are almost always infertile, so they aren’t the same species.

Some of you may be surprised that ligers are generally infertile, considering that they’re bred for their skills in magic.

So technically, Ligers are not the largest cat species; they are a hybrid offspring, and they are indeed the largest extant feline organisms, but they are not the largest species in the world. The largest cat species is, in fact, the tiger – not that I’m bitter or anything.

Admittedly, I dropped the issue long before I went into the above rant on the difference between hybrid offspring and species; I had nothing to gain, and I didn’t want to be meanspirited. But it bugged me, and my trust in their trivia felt betrayed.

Side note: how a quizmaster responds in an ambiguity dispute is just as important as avoiding the ambiguity altogether. A common but thoroughly unrewarding response is the unsophisticated “I said so”, or the variant “that’s what it says on the card”. This is a poor response because asserting hierarchy rather than asserting knowledge is antithetical to good trivia; it reeks of insecurity and it’s usually a sign that the quizmaster’s commitment to trivia is shallow and thoughtless.

A better response is to check the quiztaker’s answer against the question to see if it is valid. If the answer doesn’t quite fit the prompt, give a justification, but if the answer is valid, give the quiztaker a point with a mea culpa. This creates a relationship of trust and rewards players for thinking outside of the box.

So, yes, in short the quizmaster is responsible for ensuring that the correct answer is as unambiguous as possible. It’s a simple rule, but it’s a lot more complicated to follow through on than you might think. Usually, making the questions more detailed and providing more ways to find the correct answer (see Rule #2 and Rule #7) solves this problem, but not always.

My favorite ludicrously complicated example of a question that by all appearances should have been unambiguous, but wasn’t, came at my go-to trivia spot, the Lookout.

The quizmaster asked a question along the lines of, “What Polish nobleman served in both the Polish-Lithuanian Commonwealth Army and the US Continental Army, obtaining the rank of Brigadier General, led a revolution in his home country, and is now regarded as a national hero in the United States and Poland?”

The moment I heard that question, I froze in horror. Not because I didn’t know or had forgotten the correct answer, but because by happenstance I could think of exactly two individuals who almost exactly fit that same weirdly specific description: Casimir Pulaski and Tadeusz Kosciuszko!

I kid you not: they were both Polish; they both served in both the Commonwealth Army and the US Continental Army; they both obtained the rank of Brigadier General in the US Continental Army; they each led a revolution in their home country (Casimir Pulaski led the Bar Confederation Uprising; Tadeusz Kosciuszko led the creatively named “Kosciuszko Uprising”); and they are both regarded as national heroes in the United States and Poland!

Does that technically make them bipolar? Get it? Y’know… cuz they’re two Poles? It’s a… sorry.

That’s absurd.

The main difference was their timing: Casimir Pulaski had already served a long career in the Polish-Lithuanian Commonwealth Army as a cavalry commander, and he sought employment with the Continental Army because he had been exiled for his involvement in the Bar Confederation Uprising. In fact, he died fighting for the American cause in the Battle of Savannah.

Tadeusz Kosciuszko, by contrast, joined the US Continental Army as a romantic but inexperienced revolutionary gallant, and proved himself a talented military engineer. He then returned to the Polish-Lithuanian Commonwealth, seasoned by his service under George Washington but no less romantic in his revolutionary ideals, and participated in several military actions with the Commonwealth Army – most notably leading the Kosciuszko Uprising against Russian domination in the eastern portion of the Commonwealth.

Scenarios like these are incredibly rare – at least when they’re purely accidental – so I didn’t hold it against the quizmaster. But it does emphasize that writing truly unambiguous questions is a lot harder than it looks.

Rule #5: Challenge

Like most competitions, trivia becomes fulfilling by providing challenge. That’s not an open invitation to be punishing, but it is an invitation to press the quiztakers to work hard for the answers.

Many quizmasters think that they can broaden the audience and make trivia more approachable and enjoyable by making it easier – but this is at best a short-term gain, and it comes at the cost of a long-term fulfilling experience. Yes, when you first start out playing trivia, you get a temporary high for every question you get right and a temporary low for every question you get wrong. If that enjoyment were static, then yes, easier quizzes would be better. But for people who, like me, are more dedicated to trivia, the enjoyment comes from overcoming a challenge and thinking at a high performance level – and that experience does not come from easy questions.

Before I went to the Lookout, I went to a different pub trivia near where some family friends lived. Initially, I joined them in playing trivia, but gradually they stopped coming due to their schedules, and I often showed up by myself.

I still went, because I really wanted to play trivia, so logically I should have just joined up with another team – but I was also very shy, and anxious about joining a random team, so instead of doing the reasonable thing I played trivia by myself – against several teams with up to six people (occasionally more).

But after a few times of playing trivia by myself, I realized a big problem; and no, it’s not the obvious one that showing up repeatedly to do trivia by myself because I’m too nervous to ask a new team is really sad and pathetic. I knew that one straight away.

No; the big problem is, I was doing way too well.

By myself, I usually ended up in the top three teams. And several times, I actually won. By myself.

The first time I won by myself was a moment of excitement and pride. I was really happy about it.

The second time I won by myself, I was mostly just shocked – it was a night I thought I did really poorly, in fact, and I was sorta prepared to leave after I turned my scores in and only stayed out of morbid curiosity. (It turned out everybody had a shitty night!)

The third time, I knew. Before I even turned my scores in, I just knew. I knew I got first prize, and I felt that I owned it.

And that was the moment I decided I needed to find a different trivia spot – one that offered more challenge. Because I am a person who is competitive with myself, and I knew winning that trivia – even by myself – would no longer be fun and rewarding after that third time.

I wanted to improve, to learn and to grow, and that requires challenge. That’s what the quizmaster should want for their audience, too. A pub quiz shouldn’t have to pander to the insecurities of novice quiztakers; it should challenge the toxic fixed mindset that asserts that people are precisely as intelligent as they’ll ever be and won’t get better. Fuck that nonsense; there are better, more enriching ways to make a quiz enjoyable than giving away cheap correct answers. I want to struggle and strive and feel that I earned my success; and providing that well will earn the quizmaster my loyalty. (It’s pretty rare, I’ve found.)

Rule #6: Non-Pedantry

Let’s disabuse a common misconception about trivia: trivia is not about pedantry. It should never be about pedantry. Pedantry is the antithesis of good trivia.

And I know I write that right after complaining at length about how Ligers are not technically a species, but hear me out.

If you are a curious person with a solid foundation of knowledge, trivia should be an empowering experience. Pedantry, on the other hand, smothers curiosity and leaves its audience feeling disempowered. Giving quiztakers a question which demands an unreasonable level of detail, or which can only be understood with the type of information that is inaccessible or disengaging to the general public, will annoy and humiliate most of the audience.

Well, actually

I think most people will agree with this conclusion but – again, somewhat ironically – the details get complicated, and ultimately what’s pedantic and not is subjective, and depends on what kind of knowledge the audience values.

For example, I personally don’t like questions which ask me for the exact year a historical event happened – and I say that as someone who is actually pretty good at remembering historical dates. It feels arbitrary and unfair unless the question provides information to zero in on the answer, such as a “Presidential election” year. (The worst are multiple choice questions that only reinforce the unreasonable granularity; e.g. for the question, “What year did Napoleon win the Battle of Austerlitz?”, the multiple choices “1804, 1805, 1806, 1807” are a terrible spread of answers, compared to “1794, 1799, 1805, 1812”.) On top of that, it usually takes a hell of an event for the year itself to become distinctly tied to it in cultural memory (e.g. 1066, 1492, 1776, 1789, etc.). In any case, there’s usually a better version of the same question that emphasizes the content rather than the year.

On the other hand, I personally think that insisting on the correct use of the scientific term ‘species’ is not overly pedantic, and that misusing it is likely to throw people off, as it did me. Then again, the barista might disagree. And maybe that’s okay.

You’ll notice that there is a tension between the principles of rule #4 and rule #6; one condemns vagueness, and the other condemns persnicketiness. Again, I argue it’s a productive tension; striking the right balance will keep the quiz focused on the right details, and make the quiz more engaging.

As a rule of thumb, I suggest focusing on the details you find most significant, and try to recognize any other ways in which the audience can interpret the question. Then, lean on the audience to bring in their own detail attention if they choose to. Who knows? As the quizmaster you might learn something valuable.

The important thing, though, is making the answers accessible to people who have a broad range of knowledge and a requisite amount of curiosity. Even if they don’t get a question, you want them to think they might have gotten it, without resorting to tedious cataloguing.

Rule #7: Multiplicity

Bear with me: I’m about to go on a very long tangent…

…Let’s return to the topic of Jeopardy!’s most well-known and identifiable gimmick: answering in the form of a question.

Jeopardy! is obviously a venerable institution and a touchstone of virtually all trivia lovers in the United States and many abroad, but I do have a problem with its famous rule: if you think about it, the gimmick of answering in the form of a question is… well, it’s bullshit.

For those who don’t know, the “form of a question” comes from a key part of Jeopardy! history. In the 1950’s, television offerings in the United States were stuffed with panel and game shows, largely because they were relatively cheap to produce. When the US Supreme Court (yes, the literal SCOTUS) ruled that quiz-based shows were not a form of gambling in the case FCC v. ABC (1954), quiz shows emerged in force in the mid-50’s.

However, multiple broadcasters made the format competitive, and prime time shows like The $64,000 Question and Twenty-One fought hard to distinguish themselves and attract viewers.

They first hit upon the idea of raising the stakes with high-value grand prizes – a profound innovation that still influences (i.e. infects) the way reality-based shows are marketed. But once the stakes were raised all around, the shows started looking for a new competitive edge.

Eventually, producers turned to promoting personalities and the drama therein. Viewership grew with recurring contestants that audiences could identify and cheer for. But hold on; trivia is a competition, not a scripted television show. You can’t control who the recurring contestants are; that’s tantamount to controlling who wins and loses, and that would violate the fairness and integrity of the quiz, and its pretenses to reality. And I mean who would script a reality show to exaggerate the drama? That would be unthinkable, right?

Right?

Yeah, so in 1958-59, a bunch of people came forward and confessed that the television quiz shows were completely rigged. Most notoriously, the game show Twenty-One engineered a long winning streak for Herb Stempel, a scrappy working-class Bronx resident, only to force him to throw a match against Charles Van Doren, a handsome high-society Manhattanite academic with an MA in Astrophysics and a Ph.D in English from Columbia University, so that the producers could choreograph an even longer and higher-profile winning streak for Van Doren. (This was notably dramatized in the 1994 film Quiz Show.)

The fallout of the Quiz Show Scandal was swift and harsh. US Congress held hearings and introduced legislation to ban the fixing of game shows, since this was a long-ago era when Congress could be moved to action by public outcry – even on something seemingly trivial (pardon the pun). Sponsors dropped quiz shows after the public backlash, and most dedicated quiz shows in primetime were off the air by 1960, leaving scripted television and celebrity panel shows like What’s My Line to pick up the slack.

Now, the appeal of trivia will always be strong, but by the early 60’s the quiz show brand was tainted, and conventional gimmicks like large prizes only reminded audiences of the corrupted, dishonest shows of the 50’s. A new quiz show would need something completely different: a format that the audience had never seen before, which would noticeably distinguish it from its ignoble predecessors, while still keeping it a quiz show.

And this is when Julann Griffin, wife of Big Band crooner-turned-Game Show Host-turned-aspiring television producer Merv Griffin, struck upon an idea which possibly saved trivia on US television, even though I admittedly think it’s quite silly:

“Hey! What if we turned the quiz show upside down? We could give the contestants the answers, and they’d have to come up with the questions!”

And this, in a nutshell, was the original pitch to NBC for the show that would become Jeopardy!. This simple change allowed the show to present itself to television audiences without being mistaken for the fraudulent quiz shows of the 50’s. Jeopardy! premiered in 1964; it’s still going strong today, and it has been recording new episodes for 47 of the 55 intervening years. It’s beloved, wildly successful and a paragon of good trivia practices.

Jeopardy! with its original host, Art Fleming, in the 1960’s.

So if the question format rule was necessary for Jeopardy! to get green-lit, and the resulting show has been a pillar of daytime TV for five decades, I must think the question format rule is a good idea, right?

No, of course not! The only reason it works is that the show reduced the rule to a facile gimmick and built good practices around it.

Why? Let’s think about what providing an ‘answer’ and asking for a ‘question’ really means. Unless it were handled with extreme care, it would violate the fundamental rules of trivia.

To illustrate, suppose I asked, “How many feet are in a mile?” Naturally, the correct response would be “5,280.” What happens if you reverse it? The prompt or ‘answer’ is “5,280”, and you have to come up with the ‘question’.

Yes, “How many feet are in a mile” is still a correct solution. But it’s not the only solution. In fact, there is a theoretically infinite number of questions for which “5,280” is the correct answer. “What number is an example of a j-invariant and a Heegner number in theoretical mathematics?” “What is the name of a Denver-based city magazine?” “What is 2*2*2*2*2*3*5*11?” “What is 5,279 plus 1?” And so on.

Then there’s the challenge of enforcing a real, meaningful question. Contestants can’t just focus their energy on identifying the correct answer; they also have to spend energy on forming a grammatically correct question. That takes time, and time is precious both for contestants and broadcast television.

In order to make this gimmick remotely work, Jeopardy! quickly threw out the grammatical rules; you could respond with any question form, whether or not it was technically grammatical. Then, they had to frame their prompts around a simple and thoroughly practical yet often neglected principle of trivia: the quiztaker should be presented with multiple pieces of information pointing to the correct answer – or, in the case of Jeopardy!, the correct “question”. I call this principle “multiplicity”.

Using two or more distinct pieces of information to point toward the same solution is a simple and elegant way to focus a prompt on a unique solution – sorta like how two infinite lines with different angles must converge on a solitary point. This practice was one of the best that Jeopardy! popularized, and it’s partly why the trivia remains so solid and engaging: if done right, it gives the audience more than one way to identify the solution, and other means of assessing the answer’s credibility.

Thing is, Jeopardy! struggled to adapt multiplicity to their “answer as prompt” format. They tried early on; in the 60’s their “answers” sorta sounded like actual answers and their “questions” sounded more like actual questions. To give one example from a 1968 broadcast: the ‘answer’ was “A type of ship that was the largest in the British Navy at the time of Henry VIII”, and the ‘question’ was “What is a Galleon?”. If you squint, that answer does sound like a vaguely reasonable response to that question.

By the 70’s, the pretense that the prompts were supposedly answers was mostly sidelined, in favor of the flexibility to keep the solutions simple, and the prompts dynamic and multi… multipli… “multiplicitous”? Sure, let’s go with that.

To provide an example from 1975: the ‘answer’ was “John Wayne introduced this Western to TV viewers nearly 20 years ago”, and the ‘question’ was “What is Gunsmoke?” I don’t think people would mistake the prompt for a natural, organic response to the question “What is Gunsmoke?”

Today, at this point in the show’s evolution, the pretense that the prompts are “answers” has been abandoned virtually in its entirety, and it’s only reflected in the “answer in the form of a question” rule, which is just a gimmick and nothing more.

Let’s just be upfront: currently the prompts on Jeopardy! are questions in everything but rudimentary grammatical style, and the rare instances where they do work as ‘answers’ are purely accidental. Furthermore, the solutions are intended as the answers to those questions, and Jeopardy! doesn’t even bother requiring them to be grammatical so long as they are technically phrased as questions. Pretending otherwise is absurd.

To use a recent 2019 example: if I asked you, “Who is Napoleon?” and you replied, “Ignoring this commander’s experience 129 years earlier, in the winter of 1941, the Germans attacked Moscow”, I wouldn’t think that’s a good answer; I’d think you were a raving lunatic.

No; Jeopardy! is popular because its prompts are good, dynamic, well-constructed trivia questions with appropriately simple answers. That is the practice you should implement in your trivia, and a good way to do so is building multiplicity into your questions.

Here’s an example I remember fondly from The Lookout: the round’s theme was “Marys” and the question was as follows: “Serving as the country’s first female President from 1990 to 1997 before stepping down to become the High Commissioner of Human Rights for the UN, what Irish politician used a Simon & Garfunkel song as her campaign theme?”

Now, our team was acquainted enough with Irish politics to know the first female President of Ireland was Mary Robinson. But what stuck with me was the way the question provided multiple paths to find the same answer – if you knew how to use the information in front of you.

Suppose, for example, you didn’t know anything about Irish politics, or female Presidents abroad, or the governance of the UN. You still probably know some of Simon & Garfunkel’s famous songs, and you know the theme of the round is Marys. If you use these two details correctly, you actually have a moderately decent shot at guessing the correct answer.

Because the theme was on famous Marys, you might be able to infer that the answer has to relate to the theme since the question did not. You also might be able to infer that the Simon & Garfunkel song in question also relates to the answer’s name, since the quizmaster conspicuously didn’t mention the song’s title. There is a chance that you can combine these facts and guess that the politician’s name is most likely Mary or some variant, and her last name is possibly Robinson, based on Simon & Garfunkel’s landmark song “Mrs. Robinson”. Without knowing anything about Irish politics, the question still provides a way for the quiztaker to get engaged and persevere to a correct answer – but it’s by no means obvious or guaranteed. (Maybe they guess “Mary Cecilia”, or “Rosemary Sage” based on “Scarborough Fair”.) Furthermore, it provides teammates who are on the fence a way to be more confident in the correct answer, by crosschecking a hunch that it’s Mary Robinson against Simon & Garfunkel tunes they know.

“We’d like to know a little bit about you for our files…”

In summary, multiplicity is a way to follow through on Rules #2, #3 and #4 without compromising Rule #1. That’s why it makes good trivia.

Rule #8: Social Play

When I use the term social play, I mean the organic phenomenon of cooperation and coordination among team members to complete a challenge – in this case, answering a trivia question. This is more than just having multiple team members to statically increase the odds that one of them will have the right answer; it should also require that teammates cooperate and communicate to extract not just additive benefits but emergent benefits from their shared information. They should see a real advantage in sharing information, discussing, working through the reasoning and building consensus in order to settle on an answer. This is a large part of what makes pub trivia such an enjoyable social activity, and a large part of what others miss when they think it’s just inebriation and test-taking.

I can keep this brief, since in some ways, social play is practically a litmus test for the other rules – especially Rules #2, #3, #5, #6 and #7. If questions are engaging, if the quiz rewards the prudent use of a broad set of knowledge, if the quiz is challenging, if the quiz is not pedantic, and if the quiz embraces multiplicity, social play should emerge organically. Conversely, the absence of social play is also a symptom which can be used to diagnose poor trivia practices. Maybe there’s no benefit to social play because the quiz is too easy. Maybe pedantry has discouraged social play by making the required information too inaccessible and arbitrary. Or maybe the quiz lacks the breadth of information to provide multiple ways to reach the correct answer. In any case, social play is a useful barometer of quiz quality – assuming your quiz format is team-based. And getting a correct answer through social play is often one of the most rewarding experiences in trivia.

One time, we received a question which went as follows: “The French call the potato pomme de terre; similarly, the Dutch call the potato what?”

My teammates and I felt a little out of our element, since none of us spoke Dutch. However, my teammate did have some background in French, and could confirm that ‘pomme de terre’ literally meant “apple of the earth/ground”, so we inferred that the Dutch word for potato would probably be similarly constructed.

After some more pondering, I vaguely recalled that the Dutch word for the fruit orange was sinaasappel or “China apple”, so based on that I said the answer was going to be something-appel, parallel to pomme de terre. What I didn’t know, and couldn’t really guess, is what the Dutch word for “earth” or “ground” would be.

After answering a few more questions, my teammate similarly recalled that the name aardvark, the African anteater species, derived from the Dutch/Afrikaans for “earth pig”, where aard was likely the Dutch root for “earth”. (This also made linguistic sense, since Dutch and English are closely related, and aard bears a striking etymological resemblance to the English “earth”.)

Based on that and that alone, after some prolonged deliberation and with little certainty, we put down “aardappel” as our shot-in-the-dark best guess and crossed our fingers.

And I’ll be damned, it turned out to be 100% correct. We were ecstatic; the thrill of managing to extract a correct answer by putting together our sparse and disparate pieces of information was equal to that of winning the whole game.

Rule #9: Themes

Let’s talk about round themes. They’re common practice, yet they’re counter-intuitive when you consider the other rules.

Ultimately, trivia themes are a high-risk/high-reward tactic: if used well, themes can greatly enhance the interest and enjoyment of a game; but if used poorly, they can alienate a large portion of the audience and feel unfair.

So why are they so popular, and what makes the difference?

Well, themes are popular because trivia is a brain game (shocker), and brains are generally fascinated by novel connections between pieces of knowledge. Themes provide us with patterns that organize the information we’re consuming and prime us to find a new connection. They engage us, inspire our creative thinking, and make the round more memorable.

However, there is a big caveat: themes must be used well. If used poorly, they can have the opposite effect. They can make the audience disengaged, discouraged and apathetic – the reverse of what they should achieve.

In short, themes in particular must be applied with the other rules of trivia in mind; if not, they can be a negative influence that breaks the rules, particularly #3 and #6. Bad use of themes can make a round too narrow and too pedantic to be engaging, and attempts to make the information more accessible to general quiztakers risk making the quiz too easy.

Good round themes, in my view, should be flexible enough to draw from multiple types of knowledge (see Rule #7) and provide quiztakers a tantalizing clue to help them on a tough question: for example, “Hot and Cold” (where the question or answer is related to the words ‘hot’ or ‘cold’) or “You don’t know Jack” (where questions or answers are related to the name “Jack”). Bad round themes force the player to draw on only one type of knowledge, and punish anyone who doesn’t have knowledge in that area: for example, “Sons of Anarchy”, “National Hockey League”, “The Thirty Years War”, “Keeping Up with the Kardashians”, and so on.

There are two slight nuances, though. The first nuance is that you can get away with narrower round themes if you can narrow your quiztakers accordingly – e.g. it’s okay to make a “RuPaul’s Drag Race” or a “Star Trek” themed trivia night, if that is how you publicize it and entice audiences to come. That’s a good way to celebrate and reward a quirky fandom. But you have to make sure this is clear and well understood by everyone who would come.

One time, I went to my regular trivia spot (before the Lookout) only to discover that it had been taken over by an Audubon Society themed quiz – and it started about a half-hour earlier than the usual trivia time. This was advertised to local Audubon members, but not to me. So when I showed up, the quiz was already under way, and I was too late to join any teams. So I instead opted to go it alone, on a quiz topic I frankly know very little about.

Lo and behold, I wasn’t the only person who made the same mistake, and the other guy and I ended up doing a chug off to determine last place. It was a humiliating experience.

The second nuance is that clever quizmasters can take a round which seems comparatively narrow and inaccessible, and transform it into a topic that draws from a broader range of information and is more interesting.

At the Lookout, the requested round for one week was “20th Century Boxers”. Taken at face value, that’s not a great topic; it’s too focused on knowledge of a sport which most people have just a basic acquaintance with. (Like, they probably know about Muhammad Ali and Mike Tyson; they might know about Joe Frazier, George Foreman and Joe Louis; but Floyd Patterson? Jack Dempsey? What are the chances the average person could distinguish Sugar Ray Leonard from Sugar Ray Robinson? Or Rocky Marciano from, well, Rocky Balboa?)

But the quizmaster decided to get clever; instead of having the questions be about 20th century boxers, he told us that the answers to the questions would all be the nicknames of famous 20th century boxers, even while the questions were about something else. 

For example, there was a question along the lines of “This Namco-produced 80’s classic arcade video game was reportedly inspired by the shape of a pizza pie that was missing a few slices?” The answer, naturally, was “Pac-Man” – nickname of Filipino boxer Manny Pacquiao.

Another question: “Often regarded as his signature song, what famous tune did John Lee Hooker play while performing as a street musician in South Side Chicago in the film Blues Brothers?” The answer is “Boom Boom”, also the nickname of lightweight boxer Ray Mancini.

This is an example of using a theme not as a restriction on the knowledge, but rather as a jumping off point; having personal knowledge about boxers in the 20th century would help you in this category, but not so much that you’ll leave everyone else in the dust. Meanwhile, people who don’t know much about boxing can still rely on the normal question/answer, and they can still get it correct.

I really like this approach; it’s more creative, fair and interesting than playing the round straight.

So feel free to use themes, but make sure they aren’t at odds with the other rules. And always be on the lookout for ways to throw a curveball and use a theme in an unexpected way.

Rule #10: Fun

And finally, the most important rule of all; the ultimate goal, the one which all the other rules are designed to contribute to. It is also the only rule which you can cite as an excuse for deliberately compromising or even breaking any of the other rules: fun.

Never let it be forgot: we are here to have fun. Trivia is supposed to be fun. We do trivia because we think it is fun. And if one of the aforementioned rules gets in the way of a question you think is fun, feel free to chuck it out the window for the sake of rule #10, and a hearty F-U-N.

Easy enough, right? Got it? Good.

But…

I am a firm believer that quizzes should be fun in themselves, because (I contend) good trivia is inherently engaging to its audience. For that reason, I take issue with trivia formats that try to force in frivolous elements to add a “fun” factor into quizzes, when they ultimately distract from the trivia itself. The Geeks Who Drink pub trivia has a bad reputation for this practice, but the most infuriating example to me personally is HQ Trivia – the mobile app-based trivia game that gained a lot of traction in 2017 for the novelty of prize money but which felt like a miserable chore to me. Why? Because every time I loaded up the app at the specific time they asked me to, I had to put up with the blithering, try-hard, “whacky” hosts and their limp and grating attempts at improv comedy that insult the intelligence of their audience. And they draw it out to an excruciating length, filling the space between questions with raw pain. It all seems to come from this assumption that trivia is dull and they need to compensate for it with “media personality” but I AM HERE FOR THE TRIVIA! STOP GETTING IN THE WAY OF THE GODDAMN TRIVIA!!!

Anyway.

If you want to make a quiz fun, I ask that you contextualize fun with the following principle, which I call the “curiosity principle”:

  • The Curiosity Principle: The fun in trivia should come from curiosity and the opportunity for learning, and from the embrace of knowledge as an end in itself – rather than a means to an end.

This principle should be the heart and soul of the fun in trivia. Beyond that, it’s what makes trivia a valuable and useful exercise.

All too often in our daily lives we are compelled to “justify” our curiosity and demonstrate that knowledge is a means to an end. That’s how we frame education, and that’s how we frame work. Our culture tends to mock people who have a passion for knowing a subject for the love of it rather than for its usefulness: those people are geeks or nerds, and that behavior is treated as aberrant. I firmly believe that this attitude is destructive, both spiritually and concretely; it smothers our ability to take joy in life, and it atrophies our ability to learn. And when the time comes when learning is important, and is useful, people who have internalized this attitude will not learn as well, and their confidence in their ability to learn will be diminished. This is an attitude for which pub trivia is a perfect antidote: a space of joy that permits you to be curious and to love learning.

One thing that breaks my heart is inviting my friends and family to pub trivia – this thing that I love – and seeing them enjoy themselves, only to have them hesitate to return because they “don’t feel like they can contribute” since they’re “not good enough”.

This seriously bothers me. I love their company; I invite them for their company; but they can’t see why I’d want them to join me if they don’t significantly increase my odds of winning.

They just see trivia as a means to an end; I wish they could see it as an end in itself.

And when people try to distract from it with razzle-dazzle, they’re tacitly denying the possibility of that kind of fun: the thrill that comes from the effort of wrangling an answer from bits of shared knowledge, or the satisfaction of getting a question right on a calculated guess, or even the interest in discovering the answer to a question you know nothing about. Those wrongheaded quizmasters cling to the subtext that making trivia more fun, enjoyable and approachable requires less trivia.

That’s just wrong; the audience for trivia is wide, and most people have the potential to enjoy it if they enter with the right mindset. And I hope quizmasters take it on themselves to fuel that enjoyment with their trivia.


So there you have it; my ten rules of pub trivia. Feel free to comment if you disagree or if you want a clarification. In the meantime, I need to remember how to write shorter blog posts.


Never with a Simple Answer,

Connor Raikes, a.k.a. Raikespeare

Why are We Still Playing Age of Empires II? – The Paradox of AoE Fandom

Tags

, , , , , , , , ,

I am part of the bizarre, paradoxical Age of Empires fandom which is going strong even though it seems to defy all the odds and conventional wisdom.

I’m not saying the game in question (i.e. Age of Empires II) isn’t correctly regarded as a classic, or that it’s remotely weird to have an enthusiastic fanbase around a 20-year-old game. After all, nostalgia is one of the unspoken pillars of the game industry.

No, I’m talking about something different. I can’t even say that the enthusiasm qualifies as nostalgia, because the players aren’t reminiscing or re-evaluating a game; they’re still engaging with it. AoE fandom isn’t just large and devout; it’s active and ongoing, built around a game that didn’t even come out this millennium. A game that’s so old school that it uses sprite animation in lieu of technology as cutting edge as polygon rendering. (Put another way: the graphics tech used in the original Quake is too hoity-toity for Age of Empires II.)

This is what we’re working with, people: we’re back to Eadweard Muybridge and the zoopraxiscope for our graphics tech.

The genre it belonged to, the real-time strategy (RTS) genre, was at one time supposed to be dead or dying in AAA game publishing, and even though recent history has shown the vitality of strategy games, and even RTS games to an extent, there’s no reason to think that would somehow make Age of Empires II relevant now.

The Age of Empires franchise never reached the stratospheric heights of the Blizzard RTS properties – i.e. WarCraft and StarCraft – and it wasn’t bolstered by other spinoff games as those Blizzard titles were. Nor was Age of Empires the most prominent longstanding historical strategy franchise – that honor goes to Sid Meier’s Civilization series.

The developer that made the game, the venerable and severely underappreciated Ensemble Studios, was fully shuttered over a decade ago, forcing the license into development limbo. And even if Age of Empires II was a critical and commercial darling in its time, we had every reason to expect that it would be overshadowed by its contemporaries and diminished by neglect.

It was supposed to be dead. Well-embalmed, perhaps, and lovingly placed as an altarpiece in the Steam reliquary for fans to gawk at like their old toys in the attic, but dead nonetheless. Even if the franchise were still alive, logically the entry from 1999 should have been replaced by something more contemporary, like 2K’s XCOM: Enemy Unknown reboot in 2012. Hell, StarCraft is widely regarded as the greatest RTS of all time, but more gamers are still probably playing StarCraft II.

Yet in spite of that, since 2013 Age of Empires II (again, a game from 1999) has released not one, not two, but three whole expansion packs, complete with new civilizations – each with individual bonuses, tech trees and unique units – and four new architectural styles, all still keeping with the 2D sprite animations. Multiple YouTube and Twitch channels are devoted to Age of Empires II, with sizeable numbers of followers. These are not the signs of a fanbase that’s recalling the good times they had with a classic, or trying to deconstruct and analyze a game other people love. They are engaged with it now; they are playing it now; this game is a part of their regular gaming diet.

Less than ten years ago, the whole franchise was halted. There was no meaningful ongoing development, and yet somehow, ongoing development had to be invented for it.

Consider this: on May 14, 2019, I checked the public Steam game stats on current players/peak players in the last 24 hours. Age of Empires II was the 44th most played game on Steam, listed at 11,095 current players, and 15,278 peak players. That current player count was ahead of The Witcher 3: Wild Hunt and Elder Scrolls V: Skyrim, and just behind Payday 2. Peak player counts were far ahead of Total War: Rome II and XCOM 2, ahead of Dark Souls III, and slightly behind Assassin’s Creed: Odyssey. Its current player count was 2.5 times greater than the current players of Sekiro: Shadows Die Twice – one of the most widely anticipated and critically acclaimed releases of this year, which came out less than two months before.

Some of the games above were chosen as Games of the Year; some of them are RPGs, some online multiplayers. Most of them are critically acclaimed; all of them came out in the 2010’s. And a 1999 sprite-animated game is competing, often even outcompeting, in terms of player engagement.

I wish more people talked about this, because it’s weird. Maybe it’s not a particularly new or topical story, but it’s abnormal to say the least, and it deserves our interest.

Why is there such a devoted fanbase centered around Age of Empires II, when by every measure it should have been too old, too neglected and too irrelevant? What does that mean about conventional game industry wisdom, and what lessons can we learn about longevity and fanbase? Am I qualified to answer any of these questions? Certainly not. But I’m gonna try to answer anyway: why are we still playing Age of Empires II?

History in the Making

First of all, let’s start with the obvious: Age of Empires II has to be a really great game.

Yet that on its own is not enough. There are many classics from the turn of the millennium available on Steam; I’ve even played some of them in more recent years (Half-Life, System Shock 2, Deus Ex, Thief, Grand Theft Auto III). But while in most cases I appreciated the experience, I rarely had the desire to keep playing them after I was done.

Of course, multiplayer games typically have more longevity. So maybe it’s more appropriate to compare it with other notable multiplayer games from around the same period, like Counter-Strike, Quake III: Arena, Halo: Combat Evolved or Super Smash Bros. But, you might notice that in each case, the players eventually moved on to newer and better versions of those multiplayer experiences, with better graphics and mechanics and often new entries in their respective franchises.

In this case, simply being a classic isn’t enough. Age of Empires II’s quality has to be timeless, to an extent that should defy belief. It has to be so great that 20 years (and three console generations) have passed, computers are nearly 5,000 times more powerful than when the game originally came out, and it’s still somehow the best playing experience for its particular niche of gamers in 2019.

Are AoE fans just weird? Do they accept outdated mechanics as a given and let their games coast on nostalgia?

Well, it’s interesting that we have a useful point of comparison – Age of Empires the original, or more appropriately the Age of Empires: Definitive Edition remake. It’s interesting because it shows a game not only in the same genre, but the same franchise, just two years older than Age of Empires II, and a classic in its own right – and yet it somehow aged dramatically worse. And the noble effort put into modernizing the graphics and tweaking mechanics on the margins for the Definitive Edition in 2017 only underscored how much the gameplay feels like a product of its own time.


Ironically, Age of Empires: Definitive Edition is far more suited to idle nostalgic reminiscing than Age of Empires II, a game which I have played regularly over the past couple of years, even though its foundation has barely changed since 1999.

The fact that I can point to all the niggles in Age of Empires: Definitive Edition – the unit pathfinding is messy, economy management feels unnecessarily like a chore, the tech tree is too convoluted yet somehow the broader strategy seems too shallow – should emphasize how much could have gone wrong with Age of Empires II and how many pitfalls it skillfully avoided. In contrast to its predecessor, every piece of Age of Empires II fits together so precisely that it has barely lost any of its fun factor, even as time should have magnified its flaws.

The gameplay is constantly thoughtful and engaging. And that’s not to mention the dizzying depth you can uncover comparing civs against each other in terms of bonuses, tech trees and unique units. 20 years later and it still feels like there are depths to plumb and quirky, unusual strategies you can employ.

There are important decisions that can make or break the game at any moment. For example: from the moment the game starts, you have to start building your economy, since your production in the early game is critical to sustaining a larger military capacity later on. But it doesn’t take long before an observant player will recognize some important strategic options. There’s a looming decision to build up a smaller army now to rush your opponent’s base, hopefully wreck their economy, and gain the upper hand. But you do so at the cost of reinvesting resources into compounding economic gains for a larger military later, which could hopefully annihilate your opponent. Then you have to weigh both of those options against building up defenses now and fortifying your base against an opponent attempting to rush you!

Communication is very important in team matches.

It provides a good balance of rewarding both broad long-term strategic planning and moment-to-moment tactical skill. It’s challenging, but it isn’t punishing. If your initial strategy doesn’t work out, there’s usually an opportunity to adapt, shift approach and remain competitive.

But good mechanics on their own won’t guarantee a large fanbase and longevity. A game also has to find an audience, grab their interest and earn their passionate loyalty. Age of Empires II wouldn’t still be relevant today if it hadn’t succeeded at cultivating that audience at the time of its release, and keeping their interest for 20 years. So how exactly did they do this?

Well, for starters, Ensemble Studios identified and targeted a specific niche of gamers: strategy gamers who wanted the visual and thematic grounding in historical warfare combined with stimulating quick-paced tactical gameplay. In other words, players could start playing for the fantasy of growing and commanding armies like a medieval general, and continue playing because the gameplay rewarded skill, and you could get better if you committed to it, and doing so felt fulfilling as you got to explore more of the game’s strategic depth.

Age of Empires II was very good at easing people into that strategic depth and making itself approachable while rewarding skill and creativity, helping it find its original audience and making the old game more approachable for new players. One of the ways it does so is through the campaigns, which I think are underrated. At a basic level, they provided an important and perhaps necessary structured single-player element for players to engage with and develop their skills, so they could begin multiplayer games more confidently. But that may undersell their secret brilliance. The Age of Kings campaigns in particular were much more thoughtful and cleverly designed than they appeared on the surface, and were possibly an underappreciated contributor to the game’s popularity and success – a fantastic way to introduce new players to the gameplay and variety, so they could gradually build their skills in an RTS, which is a notoriously difficult genre to tutorialize.

Also, much as I’ve roasted it throughout this post, the sprite animation in Age of Empires II has aged unusually well. No one would mistake it for modern graphics, obviously, but Ensemble happened to develop Age of Empires II in a sweet spot of time: after early RTS sprite animation, which looked rough and tacky (see the original Age of Empires or Warcraft/Warcraft II), and before 3D rendering for RTS games became obligatory, which looked blocky and ungainly (see Age of Mythology or Warcraft III). While other game studios were experimenting with 3D graphics for RTS in the late 90’s, Ensemble Studios chose instead to spend their resources making a 2D isometric RTS look polished and pretty, which meant it would still look appealing long after early 3D graphics grew stale and outdated. (This isn’t unique to RTS games; there’s a reason indie game designers still make games with 16-bit style sprite animation, but they don’t generally make games that mimic early PlayStation-era 3D graphics – though that era has no shortage of classics!)

Age of Empires II (1999): Still quite pleasant to look at, in my opinion.
WarCraft II (1995): Uhhh yeah that looks a little dated.
WarCraft III (2002): A game which… hey, wait a minute, this *also* looks a little dated somehow; it just looks dated for different reasons…

Age of Empires II is perhaps the best attempt I’ve seen at using bitmapped sprite animation to simulate a fluid, realistic look and feel at an aerial perspective scale. Moreover, the art style gives the game a firm historical verisimilitude, while still communicating important information to the players clearly. The original game had distinct architecture styles for Middle Eastern civs, Western European civs, Central European civs and East Asian civs, while the first expansion added an architecture style for Mesoamerican civs as well – all of which was aesthetically pleasing and based on dedicated historical research, which added to the satisfying experience.

All of this made it inviting for players to take interest in the game, and start playing.

And finally, there’s one more crucial feature in the original game that extended its longevity, perhaps more than the creators might have anticipated: mods.

Sure, the Age of Empires franchise wasn’t unique in having user-made mods created for it, but it and other RTS games stood out in priming their user bases with the idea of creating mods, by not only including a map editor with the game but presenting it openly on the front page. Age of Empires II subtly invited players to engage with the mechanics beyond just playing and improving their skills: it encouraged them to experiment with an open map as a canvas. Whether players wanted to use it to build a campaign scenario with the scale of the single-player campaigns, or to create a laboratory to test unit skills and matchups, or to just throw a bunch of crazy ideas at the wall for the S&G, the Age of Empires II map editor gave them a user-friendly tool to spark creativity – not just in the game, but in the metagame.

I’ll put money down now that there are numerous game designers in the industry today who first got interested in game design by messing around with the map editor of an Age of Empires game. More to the point, the map editor and moddability helped spawn a cabal of future game modders and content creators who would help keep the Age of Empires franchise interesting for dedicated fans, and help stretch out their engagement. This community, more than any other group, is responsible for bringing Age of Empires back from the brink of oblivion – but more on that later.

Speaking from personal experience, I started playing the Age of Empires franchise not because I enjoyed RTS games specifically, but because I enjoyed history and wanted to play a history game. I wanted to play as Julius Caesar, William Wallace and the like. When I was playing Age of Empires II around ages 10-12, I focused almost entirely on the single player campaigns. First I played through them using the gloriously cheesy cheat codes, spamming Furious the Monkey Boy and/or Saboteurs. Eventually I settled down and played the game properly, only to discover how satisfying it was to actually outplay an opponent.

Eventually, I moved on to other games in the franchise like Age of Mythology and Age of Empires III, but I always looked back fondly on Age of Empires II. Something about it made it stick out in my memory more, and made it my favorite entry in a series near and dear to my heart. It was an irreplaceable part of my growth as a gamer, and it was the game I needed back when I was 10.

That’s why, several years later, I saw the game available on Steam and started playing it again as a young adult, and discovered to my dumbfounded surprise that Age of Empires II not only held up, but had vastly more quality and depth than I remembered. And now, I continue to play it on a regular basis – and look forward to the new releases facilitated by the franchise’s strange renaissance in the 2010’s.

But therein lies the rub; the second part of the question, which is at the heart of the paradox, and the strangeness of its renaissance. Why did the revival in the Age of Empires franchise revolve around the original games, rather than a remastering, or a reboot, or a sequel?

Put another way: “why are we still playing Age of Empires II” is asking more than just “why would a 20 year old game still be fun to play”. We also have to wonder why it wasn’t replaced by another game that was better, or at least more engaging to a modern audience.

Any game that was this good, whose goodness was so widely acknowledged, should have naturally driven resources into developing a remaster, a reboot or a sequel which was, if not better, at least more accessible and fun for current players. The love of the original XCOM: UFO Defense created the foundation for an XCOM reboot. The love of StarCraft created a foundation for StarCraft II. But the later Age of Empires sequels didn’t generate as much interest and passion, and the fanbase rallied around Age of Empires II as the industry dragged its feet. As of 2019, development on a new installment in the franchise, Age of Empires IV, has only just begun. (Meanwhile, Age of Empires II has had three official expansion packs since 2013.)

Why did it take so long, if the game was so popular?

It implies that Age of Empires II was in the peculiar position of being a great game that was the property of an organization oblivious as to why it was great, and which would have effectively written the whole thing off if it hadn’t been proven spectacularly wrong.

Exeunt Ensemble

Here is what I can gather from the broken whispers of history heard amidst myth and legend and/or claims on the internet that use broken web links as citation.

Ensemble Studios was founded in 1995 by brothers Tony and Rick Goodman, and their friend/colleague, John Boog-Scott. Tony Goodman and John Boog-Scott were already established in software entrepreneurship through an unrelated company called Ensemble Corporation, which specialized in management and reporting software. This seems like the natural setup to a punchline about Ensemble Studios games being as fun as data entry, but they let the half-assed comedians of the world down by making really damn good games.

It seems that Tony shared a passion for computer games with his designer and developer brother Rick, and in 1995, they recognized a big change that would make developing PC games a lot easier: Windows 95.

The breakthrough with Windows 95 is that it merged the operating system (OS) with a Graphical User Interface (GUI) shell by unifying MS-DOS with Windows. Prior to this, GUIs were often (but not exclusively) sold separately as an add-on or peripheral. This made game development more difficult, forcing game designers to write drivers and test software for various mouse and sound card combinations, and continually release updates to accommodate new drivers. By contrast, games could rely on Windows 95 and later 9x OS systems to handle hardware support at the OS level, which eliminated the need for game studios to write their own drivers to run PC games. The massive popularity of Windows 95 as the OS standard for personal computing meant fewer barriers to entry and more resources to spend creating better gameplay.

After their founding, the new Ensemble Studios went to work developing a concept for a historical strategy game, inspired by Sid Meier’s Civilization but with different gameplay. To this end, the Goodman brothers reached out to their college friend whom they met in a Board Game club at UVA for professional advice: Bruce Shelley, a.k.a. the co-designer of Sid Meier’s Civilization (the one that isn’t named Sid Meier). That’s one hell of a friendly connection. It’s like getting tips on your fantasy novel manuscript from your D&D buddy, George R. R. Martin.

By 1997, Ensemble Studios had hired Bruce Shelley along with respected game designer Brian Sullivan, and developed the original Age of Empires. I’ve said before that it hasn’t aged as well as its sequel, but that’s despite (and probably because of) the fact that it was a groundbreaking game on the cutting edge of the RTS genre at the time. The interface layout, developed largely by Rick Goodman, popularized the isometric perspective and the diamond mini-map, placing unit/building commands and the mini-map at the bottom of the screen rather than the side, and adding civilization names/age to the top screen bar. Many of these features overtook and replaced the top-down chessboard perspective seen in Dune II, WarCraft/WarCraft II, and the original Command & Conquer.

Age of Empires was also notable for its commitment to challenge without “cheating” – in other words, the enemy AI would compete on a level playing field and challenge the player through tactics, strategy and management, rather than having a “buff” in resources or unit strength, or having access to (and responding to) player information which it didn’t discover fairly. This commitment against AI “cheating” meant buggy and easily exploitable AI in the short term, but in the long run it meant the gameplay felt more fair, and the unit AI improved more quickly as a result.

Ensemble Studios released Age of Empires to great critical and commercial success through a long-term publishing deal with (who else?) Microsoft, who helped their early success in more ways than one.

At this moment, Microsoft’s relationship with Ensemble Studios might have seemed like a great match. Ensemble had a strategic partner at the epicenter of PC gaming, and Microsoft had a partner in game development for the PC market, which could drive Windows sales and expand user engagement with the Windows platform. Prior to this, Microsoft’s game publishing was mostly relegated to flight simulators and educational software, so perhaps the premise of Age of Empires made it easier for Microsoft to get on board, because Age of Empires is arguably an educational game – in the best way possible.

Rick Goodman eventually left to found his own game studio, Stainless Steel Studios, which would go on to create a later competitor to the Age of Empires franchise – namely, Empire Earth. But the remaining core stayed and developed the second installment, Age of Empires II: Age of Kings. Every innovation in the original was further refined in Age of Empires II and the Conquerors expansion, adding even more gameplay elements which are now considered crucial to the genre: the idle villager mechanic, which identified an inactive economic unit; smart villager AI, so a villager would automatically pursue an economic task after it built the associated economic building; marketplace trading, to sell and purchase resources, with dynamic pricing that responded to supply and demand changes; unique units; unique techs; and civilization-specific sound bites, based on the language of the culture in question.

The game and its expansion were released to even greater critical acclaim and commercial success, and were widely regarded as the high point of the franchise – a reputation that has only strengthened with time, given that we’re still playing it.

So what happened? Why didn’t the Age of Empires franchise go from strength to strength? What went wrong?

Let’s start in the year 2001, less than a year after the Conquerors expansion. That year, Microsoft acquired Ensemble Studios in full. Again, no immediate sign of trouble yet; up to that point, Microsoft greatly contributed to the success of Ensemble Studios and their partnership seemed mostly productive.

But it was also the year that fundamentally changed Microsoft’s interest in and approach to gaming, because it was the same year Microsoft released the original Xbox. Indeed, the platform that drew their attention in the 2000’s was not the PC market, which they already dominated by 2001, but the console market, where they had potential to grow, and compete for a different kind of gamer: stimulation-seeking, action-oriented core gamers.

This is speculation, but I highly suspect that Microsoft’s entry into console gaming led them to be more hands-on and strategic about game publishing, and now that they owned Ensemble in full, they had more leverage to assert their agenda. It didn’t impact Ensemble Studios overnight, but over time, Microsoft’s relationship with Ensemble changed, because real-time strategy was neither an action-oriented genre nor well-suited to consoles.

This was around the same time that the looming specter of 3D finally came home to roost for real-time strategy. For a long time, the scale of RTS games allowed them to get away with 2D sprite animation so long as they were animated well. But by 2001, the leap into 3D rendering and environments was inevitable. Ensemble Studios and Blizzard already had their first 3D games under active development, to be released the next year.

In the case of Ensemble Studios, they chose the spin-off Age of Mythology, so they could experiment with a less grounded, more over-the-top game before applying the lessons learned to the main Age of Empires franchise. Set in a sort of extended universe of Greek, Egyptian and Norse myth, Age of Mythology was a game that allowed the player to summon meteors to crash down on their opponent, train cyclops that could pick up an enemy spearman and throw them several meters to their death, and (in the expansion) release a 100 foot tall titan from Greek/Egyptian/Norse hell and wreck your opponent’s base.

Age of Mythology has been criticized for not being as elegant or tight as Age of Empires II, but it was fun as hell, and anyone who would argue against that deserves to get insta-frozen by a frost giant and headbutted into a cliff-face by a minotaur. Accordingly, it was also a critical and commercial success.

But it did start a trend that may have doomed the series in the long run: over time, Ensemble’s graphics tech became more advanced and expensive to create, but the player base remained steady and the critical reception, if anything, turned slightly downward. It was more than just rendering 3D polygons; in some cases, they even simulated environment and destruction physics in gameplay, rather than pre-rendering the animation. Anything less would have made the game feel more, well, gamey.

In short, costs of development rose with the transition to 3D, while copies sold held steady; and as the gameplay moved away from the precision honed by 2D games, critical reception became more ambivalent. The release of Age of Empires III in 2005 only continued this trend. It was ahead of its time and its competitors in graphics tech and use of a physics engine – cannonball fire could literally blast multiple units back and send them flying in all directions – but that didn’t save the game from the weakest critical reception of any main release in the franchise, and unit sales were more or less in line with previous entries. By most objective standards, the game was still a critical and commercial success, but with such anticipation for a new game, in a genre that needed to prove itself in the eyes of an increasingly skeptical publisher, anything less than a global phenomenon made the game a disappointment.

Meanwhile, Microsoft’s Xbox was nearing the end of its lifespan and… well, it didn’t make a profit, but for a first console in a competitive market it wasn’t a bad start at all. It outperformed the Nintendo GameCube and Sega Dreamcast over its lifespan, and if its success in North America had been matched in the international market, it would have been a big success. That’s all the more impressive considering it was competing directly against the PlayStation 2, which had a head start both in release and in game library, and which ultimately went on to become the best-selling game console of all time. By that standard, second place wasn’t that bad.

More relevant, however, is that Microsoft had internalized lessons, some of which they were attempting to implement with the Xbox 360 – and would do so to great success: strong exclusive franchise titles, online multiplayer support, action-oriented games (especially shooters), good third party support for cross platform games, and so on.

Again, speculating slightly, but this was around the time that Microsoft started to succumb to some of the industry’s “conventional wisdom” and “truisms” during the mid to late 2000’s: “PC gaming is on the downturn; PC-specific genres are becoming irrelevant (unless they were MMO); RTS games have little future; gamers don’t want strategy games, they want action games and shooters; if real-time strategy can’t reinvent itself then it will die out as a genre.”

(Some of this is ridiculous in retrospect, but people really did believe this at the time.)

In any case, Microsoft put Ensemble Studios under growing pressure to diversify and justify their expense. Though they might not have known it at the time, Age of Empires III would be the last main entry in the franchise.

Ensemble Studios worked instead on other projects unrelated to the Age of Empires franchise, which increasingly took cues from Microsoft’s tacit benchmark for a successful video game franchise: let’s imagine a game that was an action-packed first-person shooter, to attract the large target base of casual/core gamers; a game designed specifically for console gaming, which could not only be closely identified with Microsoft’s Xbox system but could build Xbox’s public brand; a game that could not only drive Xbox sales, but would attract engagement with Microsoft’s Xbox Live online multiplayer service; a game that could make a ton of money off of its uncomplicated gameplay and visceral entertainment. I wonder what such a hypothetical super-game would look like…

Oh, right.

What I’m saying is, the most high profile Ensemble Studios projects after Age of Empires III eventually rebranded with the trappings and lore of Halo, Microsoft’s golden franchise, in an act that seemed slightly like a baldfaced attempt at appeasing a publisher losing faith.

The first of these was a planned Halo MMO, which was eventually cancelled. This project, however, grew the headcount of Ensemble Studios to an ungainly level, and they didn’t downsize and readjust after the project failed, making the expense of Ensemble Studios even harder to justify to Microsoft.

The second of these projects was an attempt at a half-way compromise between Ensemble’s RTS roots and Microsoft’s corporate vision, which only served to demonstrate why it wouldn’t work: Halo Wars, a real-time strategy designed for consoles and set in the Halo universe.

Consider all the contradictions Ensemble Studios had to resolve in order to make this game playable, let alone popular. They had to create a game that appealed to both Halo fans, from a more casual shooter console player base, and Age of Empires fans, from a complexity-oriented strategy PC player base. They had to sell impulse-driven Halo fans on slower-paced resource gathering and base building, while selling RTS fans on more austere, stripped-down mechanics to accommodate consoles; and while we’re at it, they had to sell RTS fans on imprecise controller inputs over the far more intuitive and accurate keyboard-and-mouse inputs. They also had to sell Age of Empires players with a presumed interest in grounded historical warfare on the histrionic space opera lore of Halo; and even if they’d be open to the soft sci-fi lore of space marines in power armor colonizing exoplanets and fighting an advanced quasi-religious, mystical alien race mired in its own factional politics… well, Halo Wars had come to the party about a decade too late, because those RTS players already had longstanding franchises, like, I don’t know, StarCraft, or Warhammer 40K, or a few dozen more imitators. (And those franchises had more lore, more depth and existing multiplayer bases!) Hell, the first real-time strategy in the modern sense of the word was Dune II, pretty much the ur-example of soft sci-fi lore focused on space colonization and factional politics!

Also, one suspects the reason they didn’t have the Flood as a playable faction is because they would have resembled the Zerg too much, and then the StarCraft comparison would have been too obvious. Not that it would have made a difference.

Halo Wars wasn’t a bad game, nor was it a dramatic critical or commercial failure. But there was zero chance that it would revitalize faith in the studio and save them from the brink. If the idea was to make RTS commercially viable for consoles and bring the Halo fanbase into the Age of Empires fold, it was doomed from the start. Alternatively, if the idea was to provide a case example why Ensemble couldn’t integrate and “synergize” with Microsoft’s strategic direction or their most popular properties, it worked splendidly. Microsoft already had plans in the works to close Ensemble before Halo Wars officially launched, which shows just how little confidence they had in the company.

And like that, Ensemble Studios was shuttered. All non-essential staff were laid off and the rest were moved to new studios and organizations to maintain services for existing games. For fourteen years, Ensemble Studios built a phenomenal reputation on critically beloved games with large fan bases. Over their lifespan every game they made was at minimum decent, and most of them were innovative, genre-defining classics. Microsoft retained the rights to the Age of Empires series while key figures – most notably Bruce Shelley – left to found a new studio, never to work on the franchise again.

And this circuitous drama is why Age of Empires fans never got a game that overshadowed Age of Empires II: because its developer came under pressure to reinvent the genre it perfected, and ended up sidelining the franchise for projects that only facilitated its own plug-pulling.

On one hand, this was an early case of a talented beloved developer being shut down by a publisher after struggling to adapt to the publisher’s trend-driven expectations (which has become a big theme in the game industry recently). Strange as it sounds, the PC company lost faith in PC gaming. On the other hand, this was also a case of the publisher (and perhaps even the developer) misjudging the game, and severely underestimating the devotion of the fanbase. Because after the closing of Ensemble Studios in 2009, many of the underlying assumptions and “industry logic” that motivated the decision to close the company (rather than, say, downsizing and/or letting it go independent) were about to be proven wrong.

New Strategy

So that conventional industry wisdom of the 2000s I wrote about above? Well, a lot of it turned out to be pretty false.

First of all, the idea that games being released on multiple platforms would ultimately help console gaming turned out to be wrong. On the whole, I’d say PC gaming has benefitted more and been on the upswing over the past decade, taking advantage of its natural edge in digital distribution and online streaming. Consoles, on the other hand, have become more stagnant, and their success or failure largely depends on their ability to publish exclusive games.

Second, if you paid close attention in the early 2010s, you’d quickly realize that there was still a huge market for strategy games, regardless of their exclusivity to the PC. In 2010, StarCraft II released to critical acclaim and commercial success (obviously), but beyond that, the Total War franchise kept going from strength to strength, Civilization kept going from strength to strength, and the XCOM reboot was a massive hit. This is in stark contrast to games that tried to adapt the RTS to more modern action-oriented playstyles. Double Fine’s Brutal Legend – a 2009 game best described as an attempt to covertly smuggle RTS gameplay into a brawler action-adventure for consoles – was met with a lukewarm reception, and never achieved the cult classic status of other Tim Schafer-directed titles. The Bureau: XCOM Declassified, 2K’s original plan to revive the XCOM franchise, was ironically completely overshadowed by the more faithful remake that was released as a hasty side project.

The one earth-shaking innovation to the genre that took the world by storm only confirmed how naturally engaging real-time strategy was when it was stripped down to its basic parts, and proved the potential market for strategy games was much larger than any publisher knew. All they had to do was make it commercially accessible and publicize the gameplay. (And naturally, the innovation came bottom up from the fans themselves.) I’m talking of course about multiplayer online battle arenas, or MOBAs, such as Defense of the Ancients (DOTA), League of Legends, Heroes of the Storm and Smite.

For four straight years, DOTA 2 (a Valve-produced sequel to a Warcraft III mod… even though Warcraft is a Blizzard property, not a Valve property) was the most concurrently played game on Steam, until it was replaced by the disruptive upstart of yet another new genre, PlayerUnknown’s Battlegrounds (PUBG). Beyond their large player bases, MOBAs were instrumental in promoting an ascendant business model, relying on free-to-play online multiplayer games married closely with streaming services and eSports promotion. The genre also popularized gameplay built around unique heroes with specialized abilities competing in teams, for asymmetrically balanced matches. Ironically, it was the FPS genre that ended up taking big cues from this new strain of RTS, ultimately borrowing mechanics and concepts to create (or at least formalize) the hero shooter genre, as seen in titles such as Overwatch and Paladins.

“Real-time strategy is a dying genre…”

None of this guaranteed that Age of Empires II had a market in the 2010s; I only bring it up as a sign that strategy gaming had a lot more vitality and commercial viability than Microsoft seemed to believe at the time of Ensemble’s closure.

So what was Microsoft’s plan for the Age of Empires franchise?

Well, there was the abortive attempt to turn Age of Empires into an MMO real-time strategy, Age of Empires Online, which introduced the bold innovations of grind and freemium gameplay – all in place of the dynamic tech tree that had to be jettisoned to produce the game on the scale of an MMO. As if to emphasize the disconnect with their audience, they gave Age of Empires – a franchise built around a well-established aesthetic of historical and martial verisimilitude based on academic research – a bright cartoony art style. They might as well have blared an alarm and flashed a bright red sign for the fans saying, “dumbed down”.

Then there was the attempt to sell Age of Empires as a sort of mobile online multiplayer PvP strategy tower defense… yeah, let’s do it a service and say no more about it.

But most importantly for our purposes, Microsoft tapped Hidden Path to release Age of Empires II: HD Edition onto Steam in April 2013, while trying not to snicker at the mention of “HD”.

By every critical standard, this HD ‘remake’ was humdrum and, well, hands-off. I would describe the HD Edition charitably as “preserving the charm of the original game”, and uncharitably as “a lazy, minimal effort that did almost nothing noticeable to enhance the game or modernize the graphics”. It wasn’t a proper remake of Age of Empires II; it was Age of Empires II, with the sprite animation and all the cobwebs still intact, only now it was available on Steam with online multiplayer support and – oh, now you could display it at higher window resolutions. Wow, slow clap. It uses the same cheap looping fire GIF copy-pasted over a building to show it’s taken damage, but now we see it in widescreen!

For all intents and purposes, Hidden Path released a version that kept the original wrinkled gameplay and graphics functionally intact, while making sure a modern PC didn’t explode trying to run it.

And ironically, it was perfect. Just not in the way the developers intended. It’s possible that a more involved remake would have had less of an impact, because it might have distracted players from remembering the raw quality and solid foundation of the original. Hidden Path’s hands-off remake unwittingly invited fans to be more hands-on, embrace the game as their own, and create a life for it which the developer didn’t plan on.

See, Hidden Path’s lackadaisical effort reflected a belief that players would want to appreciate Age of Empires II as a relic of the past. That’s not necessarily bad; after all, I pick up similarly wrinkled classics in a Steam sale. I just want to emphasize that their intent was clear: take an old piece of abandonware, make it compatible with digital distribution, and let people reminisce. And more cynically, maybe they thought it might be useful to drive sales of the aforementioned new games under the Age of Empires license.

What they probably didn’t expect to uncover was an existing fanbase so large, devout and hungry that the 1999 video game would have a second life to rival its first in scale.

At some point during the development of the HD Edition, before its release, Hidden Path discovered, to their presumed astonishment, that not only was there an active fan mod community for Age of Empires II, but it had already made its own fanmade expansion to a game that was over a dozen years old!

The Forgotten Empires, as it was known, featured fully fledged campaigns, five new unique civilizations, new maps, new units, new game modes, a larger population limit, gameplay and balance adjustments, and online multiplayer support (albeit not on Steam) – and it was posted shortly before the HD Edition launched.

Remember what I said before about the mod community keeping the love of Age of Empires II alive and reviving its relevance in the modern era? Well, this is why. That is what fan engagement, deep gameplay and a welcoming attitude to fan mods did for Age of Empires II, when businesspeople who had ultimate decision-making power were about to give the game a dismissive treatment.

Thankfully, Hidden Path decided the best course of action was to elevate this fan project and give it the studio’s official blessing. Hence Hidden Path and the mod makers (who subsequently incorporated as Forgotten Empires LLC) worked to integrate the expansion into the HD Edition and release it as Age of Empires II: The Forgotten.

That would be a charming enough story to publicize in the game industry as it is, but it didn’t stop there. The unexpected expansion revived fan interest in the title. And with new civilizations and gameplay tweaks, and new ways of streaming and sharing content with the game’s fans, the expansion also led to the rise of new content creators built around the game, sharing information on civ advantages and strategies. This generated more interest in the game, and created a virtuous cycle that kept the game’s community proactive. An active community means more players, which means more creators and consumers for mods, which means more content for creators and streams, which means a more active community – and more reason for the industry to invest resources into it.

Just to illustrate the point, below is a 40+ minute long video from a dedicated AoE content creator – just studying the strategy and mathematics behind the market in Age of Empires II. (This guy has over 160,000 subscribers, too!)

Age of Empires II has had two more expansions since then, developed and released by the same team, and the second wind lifted other games in the franchise as well. Age of Mythology was re-released on Steam in May 2014 and got its own expansion in 2016. As I previously stated, the original Age of Empires got an HD remake in 2018 (but like, an HD remake for real this time). And Microsoft is currently working on similar definitive editions of Age of Empires II and Age of Empires III, as well as a brand new full installment in the franchise, Age of Empires IV.

Which makes it all the more remarkable that Age of Empires II managed such an impressive comeback, and made itself relevant again – old age notwithstanding – in the face of all those who didn’t believe in it and weren’t taking advantage of it. Its success almost singlehandedly revived investment in the franchise as a whole – in a more thoughtful and dedicated way than, say, releasing an AoE-branded mobile Game of War knockoff.

Conclusion

There’s a flawed tendency in media management and investment to classify titles in terms of “genres” and “trends”, in a way that is indifferent to game quality or market niche – mostly because “genres” and “trends” appeal to large investment projects. In theory, if you could reduce the success of a game to the market trends that made a particular genre popular, then you could guarantee a return for investors. By contrast, you can’t know the quality of a game before you’ve already put in the investment of time and money, and market niches by their nature impose upfront limitations and ceilings on the audience for a particular game.

But media franchises don’t become popular because impersonal trends made a genre trendy; more often than not, they become popular because a great game found a sizeable audience of people who, in some way, always wanted it, even if they didn’t know it. When a gameplay mechanic appeals to some aspiration or fantasy, it’s best not to attribute its success to a market trend; instead, you should focus on why players love it, so you can maintain that joy and even enhance it. Chances are, that aspiration will be around for longer than you think.

The fans and players for Age of Empires II and games like it were there – dormant, perhaps, and bemused by the loss of the developer which made their games, but there nonetheless, had Microsoft or other publishers looked in the right places.

Instead, Microsoft made its judgment about Age of Empires according to a superficial assessment of the RTS genre and the market trends that were supposedly turning against it. They were either ignorant of, or apathetic to, the underlying quality and the devotion of the Age of Empires fanbase. And Ensemble themselves may have messed up too, by buying into the notion that they had to give the franchise more spectacle in order to keep it relevant.

Even when the industry, the owners and the march of time and progress all dictated that Age of Empires II should be obsolete, preserved only (if at all) as a museum piece to ogle at and study rather than to enjoy, fans managed to find one another and revive this game, and now feel just as thrilled by it as when it originally came out. The game was so good, so deep and so precise in its gameplay, that it is as fun to play in 2019 as it was in 1999 – and ironically, the neglect only served to highlight the rare timelessness of Age of Empires II, which shone all the brighter without a contemporary successor to steal the spotlight.

So there you have it: my answer to why a large number of gamers (myself among them) choose to play a 20-year-old game rather than the latest high-budget offering. It’s the historical real-time strategy that managed the feat of being timeless. And there’s still much the game industry can learn from the paradoxical commitment we the fans still give to this history-making classic.

Wololo*, indeed.

Always Bad at Rushing,

Connor Raikes, a.k.a. Raikespeare

*Yeah, technically wololo is an original Age of Empires reference, but it’s still iconic for fans.

The Eleventh Hour: The Enduring Legacy of the First World War

Thus at eleven o’clock this morning came to an end the cruellest and most terrible War that has ever scourged mankind. I hope we may say that thus, this fateful morning, came to an end all wars…

I will, therefore, move that this House do immediately adjourn, until this time To-morrow, and that we proceed, as a House of Commons, to St. Margaret’s, to give humble and reverent thanks for the deliverance of the world from its great peril.

– Prime Minister David Lloyd George, November 11th, 1918

On this day 100 years ago, around sunrise over the Western Front, the German Empire and the Allies agreed to the terms of an armistice. The terms of the armistice were dictated mostly by Ferdinand Foch, French Marshal and Supreme Commander of the Allied forces.

The terms of the armistice amounted to Germany admitting defeat: in return for a ceasefire and a promise not to destroy infrastructure, the Allies would occupy the Rhineland; the Germans would surrender their aircraft, warships and military supplies; all Allied POWs would be released, while German POWs remained in custody; and the Allies would be allowed to continue their naval blockade.

Alas, Germany had no leverage left to bargain with. Their front in the West had collapsed under the overwhelming force of the Allied offensive, and their country was in full-blown revolt. Weeks earlier, on October 24, 1918, German Admiral Franz Ritter von Hipper issued his infamous naval order for the German Imperial Navy to instigate a decisive battle with the British Royal Navy in the North Sea. Every sailor understood that it was a suicide mission, made in desperation to end the German cause with a final blaze of glory rather than a slow and ignominious defeat. On the night of October 29, crews in Wilhelmshaven refused to obey orders and mutinied. The German navy coerced the sailors to stand down by training torpedo boats on their ships, but the fury of the sailors was only stalled. By the time the battle squadron docked in Kiel on November 3, the crews were in outright rebellion. Supported by a militant leftwing group that had split from the German Social Democrats, the sailors spearheaded a revolution that turned not only against the military, but against the whole Imperial State. “Peace and bread” was their cry; they demanded a swift end to the war, and the replacement of the German Empire with a new socialist republic.

The revolt spread to nearly every major city in Germany, and even the reactionary conservatives of the Empire knew the end was nigh. If they were to prevent Germany from going the way of the Bolsheviks in Russia, they had to slowly give up the Kaiserreich.

Two days before the armistice, Chancellor Prince Maximilian of Baden unilaterally announced the Kaiser’s abdication, along with the Crown Prince’s renunciation of the succession. He then handed the chancellorship over to Friedrich Ebert, a Social Democrat, to lead a provisional government and form a new republic.

In the previous months, Bulgaria, the Ottoman Empire and Austria-Hungary had already agreed to armistices. Germany was alone. Their military, their government and their people’s will were all broken. And their foes were wounded enough to pursue full retribution if they did not surrender. So Germany’s four signatories agreed to Ferdinand Foch’s harsh terms, leaving themselves at the mercy of the United Kingdom, France and the United States.

At 11 a.m. on November 11, 1918 – the eleventh hour of the eleventh day of the eleventh month – the Armistice came into effect. The Great War, the First World War, had finally come to an end after four long years. The world they had known only half a decade before was in ruins. For the Allies, it was a victory without glory, assembled from the broken bodies of millions dead. For the Central Powers, it was an unqualified catastrophe. The modern world we know emerged bitter and hardened from the ashes, severed violently from the centuries of tradition that came before, never to return.


I’m an American, born and raised, and broadly speaking Americans have an awkward, ill-defined relationship with the First World War. The United States was mostly spared the harrowing trauma of the war felt by the European powers. The United States only joined the war in 1917, and by the time it could deploy troops on the Western Front, the opposing Central Powers were nearing collapse, so its involvement was limited to the final months of the war. Americans did not experience the brutal stagnation of Verdun or the Somme. The constancy, the futility of the innumerable dead hadn’t affected them, as it had the United Kingdom, or France, or Germany. The United States entered, and swayed the outcome decisively toward an Allied victory; to Americans the war was a small triumph, followed by a polite disengagement from international affairs. In the cultural consciousness of the United States, WWI is remembered mainly as the war that preceded WWII, with little identity of its own.

This is not so in Europe, where the First World War’s traumatic futility still looms large. Their memories are still lined with muddy trenches, laced with barbed wire, guarded by machine gun fire and ringing with the sound of artillery shells in flight.

So consider this a primer on the enduring legacy of that war a hundred years later, for those who want to know why this centenary is so important, and how it shaped the world of today.


The End of Monarchy: World War One brought a definitive, violent end to hereditary monarchy as the seat of power in Europe. In doing so, it completed the project that began 125 years earlier in the French Revolution: the shift from title-bearing kingdoms toward the modern nation-state: countries that drew their sovereignty not from right of lineage or the ordainment of God, but rather from the public will and the identity of the citizens. What monarchies remained were more akin to “crowned republics” – a term applied to countries like the United Kingdom, where the monarchs were limited to ceremonial and symbolic power.

Some may presume that the move away from monarchy toward republican nation-states was inevitable, but I disagree. Of the six Great Powers in Europe at the outbreak of the war, four of them were monarchies that were absolute, or otherwise consolidated considerable executive power in the hands of the Crown: Germany, Austria-Hungary, Russia and the Ottoman Empire. Coincidentally, all four of those governments would collapse as a direct result of World War One – even Russia, which ostensibly fought on the winning side.

This was a major shift in the way in which citizens related to their government. It ended a brand of political governance that dominated Europe from the imperium of Augustus Caesar onward. As late as the 19th Century, many envisioned a plan for unity and peace in Europe not through common relations between peoples but through closely intertwined monarchies. But from the Great War onward, politics was unmoored from lineage, and the opportunity for leadership extended to all – for better or worse.

 

A New Era of Nationalism: It’s well known that the aftermath of World War One precipitated the rise of Fascism, most notably in Germany and Italy, but this wasn’t the only significant development in Nationalism as a result of the war. World War One was also a major victory for nationalist movements in the empires that collapsed as a result of the war: Poland, Yugoslavia, Finland, the Baltic States, Hungary and Czechoslovakia all gained independence as a direct result of the war, becoming either new republics or new constitutional monarchies.

In addition, the Sykes-Picot Agreement led to new divisions of the Ottoman Empire for European control, and in doing so created the borders of Syria, Iraq, Jordan, Arabia and the British Mandate of Palestine. These borders would create considerable ethnic and nationalist tensions through the 20th century onward, from the Israeli-Palestinian conflict to the rise of ISIS. (The culpability of the drafters for the later conflicts is greatly debated by historians.)

Accordingly, the First World War changed the character and goals of nationalist movements throughout Europe. Where many nationalist movements before the war were motivated by goals of republicanism, nation-building, and democratization, after the war, nationalism was irrevocably tied to the goals of authoritarianism, totalitarianism and racism. The legacy of the old nationalism was the conception of the modern nation-state; the legacy of the new nationalism was racial violence, war and genocide.

 

Cynicism and Irony: The First World War was a major turning point in arts and culture, particularly in the way citizens related to authority and tradition. It consolidated Modernism as a movement that rejected both the Realism and the Romanticism that dominated the 19th Century. During the war, Dadaism came into vogue – an absurdist, anti-art movement that ridiculed the society that created such destruction by making art its practitioners perceived as appropriately irrational and meaningless. After the war, veterans recounted the war in vivid horror, ridiculed the decadent bourgeois attitudes of the world they returned to, and envisioned a world that would proceed not toward progress, but toward oblivion. The works of Ernest Hemingway, F. Scott Fitzgerald, Fritz Lang, Otto Dix, Wilfred Owen, Max Ernst, Erich Maria Remarque and Aldous Huxley, to name just a few, owe a debt to the First World War for shaping their unique perspectives.

More broadly, the war normalized cynicism and irony among the broader population. Citizens learned to distrust the competence and prestige of their political leadership. Some put their faith in bold new leaders, some of whom were militant extremists; the rest learned to cope with raised eyebrows and shrugs. That had a dramatic influence on the way we think about politics and society today, for better or worse.

 

Democracy, Autocracy, Capitalism, Communism: As the question of Monarchy vs. Republicanism came to a close, two new political questions came to the forefront. The first was Democracy vs. Autocracy. The second was Capitalism vs. Communism.

We typically associate these conflicts with the Cold War following World War Two, but that understates the immediacy with which the other global powers felt threatened by the rise of Communism. World War One turned the International Socialist Movement from a looming activist movement on the margins into a full geopolitical player, now in total control of a great power and its military. The immediate reaction came in the form of the First Red Scare, when hysteria in the United States led to the criminalization and imprisonment of leftwing organizers. But even after the initial fear died down, politicians in Europe and the United States still had to confront the newly emboldened movement, with varied responses.

In some cases, they accepted labor union-backed parties into the mainstream, and appropriated some of their ideas and reforms to cultivate support among the people and prevent them from turning toward the insurrectionist stance of the Bolsheviks. In other cases, they used the threat of Communism as an impetus to expand state control, and beat leftist groups into submission. When some of those anti-communist movements turned to outright totalitarianism in Germany and Italy, some democratic nations were sluggish to respond, believing that the new regimes would serve as a bulwark against Marxism.

I won’t go into too much detail, except to say that it took just less than 75 years after the First World War to effectively resolve the political question of Capitalism vs. Communism. The other question, Democracy vs. Autocracy, remains unresolved.

 

Science, Technology and the State: The First World War, and the Spanish Flu that it indirectly spread near the end of the war, deepened the relationship between government and scientific research. Well known were the military technologies designed to kill and maim and otherwise break the deadlock: tanks, airplanes, gas warfare, machine guns, U-boats, advanced artillery and flamethrowers. Less remarked upon, but no less important, were the technologies with significant peacetime use: the growth of wireless communication, the field telephone, blood banks, advanced surgical techniques such as wound cleaning and plastic surgery, and the large-scale study of mental illness.

In a few cases, the power to support life and to destroy it became blurred within a single technology. The most infamous example is the case of German chemist Fritz Haber. He is perhaps most notorious for developing the techniques to create chemical agents such as chlorine gas, but his most significant invention – the one which would controversially earn him the Nobel Prize in Chemistry – was the Haber-Bosch process. It provided an industrial-scale method for taking atmospheric nitrogen – the most abundant component of air – and reacting it with hydrogen in a catalyzed, high-pressure environment to produce ammonia.

Ammonia is a crucial ingredient in fertilizer, giving crops a highly sought-after nutrient for their growth. The ability to synthesize ammonia allowed the creation of artificial fertilizer on an industrial scale, allowing literally billions more people on the planet to be fed and gain food security.
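To make the chemistry concrete, here is the overall reaction at the heart of the Haber-Bosch process – a minimal sketch, with typical textbook operating conditions rather than figures drawn from Haber’s own papers:

\[ \mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3} \qquad \Delta H \approx -92\ \mathrm{kJ/mol} \]

The reaction is exothermic but sluggish at ordinary conditions, which is why industrial plants run it at roughly 400–500 °C and 150–300 atmospheres over an iron-based catalyst to get a usable rate and yield.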

Yet Haber was equally interested in ammonia’s use in creating explosives: ammonia could be oxidized into nitric acid, the precursor for the nitrate explosives packed into bombs and artillery shells. In each case, Haber’s process allowed Germany to extend its war effort and circumvent the Allied blockade, leading to hundreds of thousands more dead.

His zealous efforts on behalf of the German Empire led to his condemnation among pacifists in academia, and even to the suicide of his wife and colleague, Clara Immerwahr. But more than anyone else, Haber represents the peculiar legacy of the war in pioneering science that can give life as readily as it can destroy it.

 

By the Numbers: The First World War began on July 28, 1914, when Austria-Hungary declared war on Serbia – one month to the day after Franz Ferdinand was assassinated in Sarajevo by Gavrilo Princip. The war effectively ended on November 11th, 1918, 4 years, 3 months and 2 weeks later. The Treaty of Versailles, which officially ended the war, was signed on June 28, 1919, exactly five years after Franz Ferdinand’s assassination.

The total military dead was approximately 10 million: 5.5 million for the Allied powers, and 4.5 million for the Central Powers. Total military casualties exceeded 30 million. Civilian dead exceeded 8 million. These numbers do not include those who died as a result of the Spanish Flu, which the war helped spread: 500 million people were infected, and 50-100 million died. Nor do they include those who died in conflicts that resulted from the First World War, such as the Russian Civil War.

19 countries fought on behalf of the Allied nations, including 6 principal powers: France, the United Kingdom, Russia, Italy, Japan, and the United States. Four principal powers fought on behalf of the Central Powers – the German Empire, Austria-Hungary, the Ottoman Empire, and Bulgaria – which held 14 client states among them and supported three additional co-belligerent states.

Participating countries came from all six inhabited continents. Significant fronts existed on three continents – Europe, Africa and Asia. There were at least 18 different theatres over the course of the war, most notably the Western Front, the Eastern Front, the Balkan Front, the Italian Front, the Dardanelles, the Caucasus, the many fronts in the Arabian Peninsula and the Levant, several African colonial fronts, the fronts opened by Japan to capture Germany’s colonial holdings, and the broader naval war encompassing the Battle of Jutland and Germany’s unrestricted submarine warfare.


Otto von Bismarck – the greatest German statesman of his time – reportedly made two harrowing predictions near the end of his life.

The first: “Europe is a powder keg and the leaders are like men smoking in an arsenal … A single spark will set off an explosion that will consume us all … I cannot tell you when that explosion will occur, but I can tell you where … Some damned foolish thing in the Balkans will set it off.”

The second, his last message to Wilhelm II in 1897: “Your Majesty, so long as you have this present officer corps, you can do as you please. But when this is no longer the case, it will be very different for you… [The Battle of] Jena came twenty years after the death of Frederick the Great; the crash will come twenty years after my departure if things go on like this.”

Otto von Bismarck died on July 30, 1898. Twenty years, three months and ten days later, on November 9, 1918, Wilhelm II was forced to abdicate, effectively ending the German Empire. Two days after that, Germany surrendered.

 

The eleventh hour, of the eleventh day, of the eleventh month.

November 11, 1918. 100 years ago, today.

[Image: remembrance poppy]

In Remembrance.

Connor Raikes, a.k.a. Raikespeare

I am Part of the Rebellion Inside the Imperial Palace: An Op-Ed


Disclaimer: Raikespeare’s Corner today is taking the rare step of publishing an anonymous Op-Ed. We have done so at the request of the author, a Grand Moff in the Palpatine Imperial Palace whose identity is known to us and whose job/status/life would be jeopardized by its disclosure. We believe publishing it anonymously is the only way to deliver this traffic boosting perspective to our readers. We invite you to submit a question about the essay or our vetting process – in the comments, on social media, or anywhere else it can improve the search algorithm.


Emperor Palpatine is facing a test to his imperium unlike any faced by a modern galactic overlord.

It’s not just that the remaining Jedi Order looms large. Or that the galaxy is bitterly divided over Mr. Palpatine’s leadership. Or even that his own Sith might well lose the Imperial Senate to an opposition hellbent on its downfall.

The dilemma – which he does not fully grasp – is that many of the senior officials in his own Imperial Palace are working diligently from within to frustrate parts of his agenda and his worst inclinations.

I would know. I am one of them.

To be clear, ours is not the popular “rebellion” of the Alliance to Restore the Republic. It’s not the one that openly opposes the Empire, fights it, and whose members risk their lives to end its injustice and bring about its downfall. On the contrary, we want the Empire to succeed and think that many of its policies have made the Galaxy more prosperous and powerful.

But we believe our first duty is to the Galaxy – err, well, at least the selective parts of the Galaxy that we like – and the Emperor continues to act in a manner that is detrimental to the long-term health of our glorious Empire.

[Image: Imperial conference room]

Did I say ‘health’? Sorry, I meant ‘wealth’.

That is why many Moffs of the Empire have vowed to do what we can to preserve our precious trade agreements while secretly thwarting Mr. Palpatine’s more “misguided” impulses until, I dunno, somebody else confronts him directly.

The root of the problem is Palpatine’s passionate, obsessive, almost fetishistic embrace of the Dark Side. Anyone who has worked with him knows he is not moored to any discernible light of the Force that guides his decisionmaking.

Although he was elected as Supreme Chancellor by the Imperial Senate, the Emperor shows little affinity for the lovely-sounding talking points that his Senate supporters reflexively cite without much thought: Unity, Peace, Security, Democracy and the Republic. At best, he mutters tributes to those talking points in scripted settings. At worst, every other action he’s taken has demonstrated that he holds utter contempt for the ideals those talking points represent, and that anyone who claimed he would ever stand by them was either stupid or lying or crazy or thoughtless or some combination thereof.

In addition to his mass-marketing of the notion that “Hate makes you more powerful”, Emperor Palpatine’s impulses are generally anti-trade federation and anti-democratic.

Don’t get me wrong. There are bright spots that the near-ceaseless negative coverage of the Galactic Empire fails to properly puff up: effective deregulation of Endor’s natural resources, historic growth and employment in Canto Bight, a more robust military bolstered by the newly constructed Death Star, and more. If only the press stopped focusing on that pesky forest and looked at all my lovely trees.

[Image: AT-AT walkers on Endor]

Look at all this infrastructure development on Endor! But you never see the Ewoks thank us!!

But these successes come despite, not because of, the Emperor’s leadership style, which is impetuous, adversarial, petty and, well, evil.

From the Imperial Palace, to the Moffs and the Military commanders, senior officials will privately admit their daily disbelief at the Galactic Overlord’s comments and actions. Most are working to insulate their operations from the personal consequences of said operations.

Meetings with him veer off topic and off the rails, he engages in repetitive force chokes, and his impulsiveness results in half-baked, overconfident, reckless strategic decisions that occasionally result in a blown-up planet.

“There is literally no telling whether he might force lightning some poor bastard,” another Moff complained to me, exasperated by an Imperial Military meeting at which the Emperor casually murdered the deputy commandant of the Imperial Stormtrooper Corps after the subordinate proposed reforms to the Stormtrooper target training regimen.

The erratic behavior would be more concerning if it weren’t for the unsung heroes in and around the Imperial Palace, some of whom have been cast as villains in the galactic media, all because they work on behalf of a Sith Lord. But in private, they have gone to great lengths to keep bad, genocidal decisions contained within the Imperial Palace, though they are clearly not always successful. (Whoopsie, Alderaan!)

It may be cold comfort in this civil war-torn era, but Galactic citizens should know that there are adults in the room. We fully recognize what is happening. And we are trying to sincerely think about maybe considering possibly doing the right thing at some point, even when Emperor Palpatine won’t. In the meantime, we can at least save our own hides.

The result is a two-track imperium.

Take foreign policy: in public and in private, Emperor Palpatine shows an open preference for autocrats and dictators; or, rather, one autocrat and Dictator – himself. He displays little genuine or even feigned appreciation for the norms and traditions that bound like-minded nations into the Galactic Republic in the first place.

Astute observers have noted, though, that the rest of the Moffs are operating on a different track, one that has much more polished public relations and is much subtler and more nuanced in its brand of authoritarianism.

This isn’t the work of the so-called deep state. It’s the work of the ethical bare-minimum state.

Given the instability many witnessed, there were early whispers within the Imperial Senate about formally revoking Palpatine’s emergency powers from the Separatist crisis – and if that didn’t work, maybe joining the actual Rebel Alliance. But no one wanted to escalate a crisis that in any way risked their livelihood and public standing. So we will do what we can to steer the administration in the personally advantageous direction until – one way or another – it’s over.

The bigger concern is not what Emperor Palpatine has done to the imperium, but rather what you as a galaxy have allowed him to force us to permit him to do to you. That’s a bit of a mouthful, but what I’m basically saying is his violent repression is kinda your fault.

Former Jedi Master Yoda said it best: “Fear is the path to the dark side… fear leads to anger… anger leads to hate… hate leads to suffering.” All galactic citizens should heed his words and break free of tribalism, and ignore the fact that we’re still helping the guy who went all-in on annihilating Yoda and the Jedi.

We may no longer have a Master Yoda. (The green elf is presumably dead and buried in a ditch somewhere, but why ask questions?) But we will always have his example: a lodestar for restoring honor to… hold on, what the hell’s a lodestar?! Even I don’t know what that is, and I manage goddamn imperial star fleets!!

Anyway.

There’s a quiet rebellion within the Imperial Palace of people choosing to put the selective-parts-of-the-Galaxy-that-we-like first. But the real difference will be made by everyday Galactic citizens who rise above politics, reach across the aisle and save us from this insane megalomaniac tyrant son-of-a-bitch.

(Please!)


 

The author is a Grand Moff in the Palpatine Imperial Palace.

[Image: “Darth Jar Jar” fan art]

May or may not be a picture of this Op-Ed’s author, according to some internet theories.

 

The Caudine Forks, and the Perils of “Balance”

One of my favorite stories from Livy’s History of Rome (In Latin: Ab Urbe Condita, literally, “From the City’s Founding”) is an unassuming account from the Second Samnite War, following the Battle of the Caudine Forks.

Titus Livius Patavinus – better known as Livy – was a Roman historian who wrote his monumental history between the years 27 BCE and 9 BCE, coinciding with the reign of Augustus Caesar as Emperor, or “Princeps Civitatis”, of Rome.

It’s important to note that early Roman history, especially prior to the First Punic War, rests on uneven ground in terms of credibility. As is often the case, historical accounts are imbued with the quality of legend, where the emphasis is less on historicity and more on cultural import and moral instruction.

Livy admits this himself in his preface, when he writes: “The traditional tales [of early Rome] are more fitted to adorn the creations of the poet than the authentic records of the historian, and I have no intention of establishing either their truth or their falsehood. This much license is conceded to the ancients, that by intermingling human actions with divine they may confer a more august dignity on the origins of states.”

The context in which Livy wrote is also relevant; he was a mild, reclusive intellectual who came of age during the final upheavals of the late Roman Republic. For nearly twenty years (49 BCE to 30 BCE), Rome was caught in a continuous cycle of civil war that broke the frail republic before finally ending with the ascendance of Augustus Caesar. Augustus was possibly a friend of Livy and certainly a sponsor of his work, with a desire to glorify Roman achievements and lend legitimacy to the new regime by emphasizing the values of stability, tradition and power. So it’s not hard to see why Livy, a man already inclined toward romanticizing the past, often presents such fanciful accounts of early Rome, emphasizing their military prowess and strong sense of duty to their country.


Bust of Livy, apparently just after waking up.

So, as a historical account, you may take this with a grain of salt, but I still regard it as an overlooked gem on the perils of “balance”, as I will call it.


The Battle of the Caudine Forks

The Samnite Wars pitted the Romans against the people of Samnium, a region on the Italian Peninsula east of Latium where the city of Rome rests. The Samnites competed with Rome for control over Central Italy, making them an important rival for the early Republic.

As Rome expanded, tension with Samnium became inevitable. The first war began, as such wars often did, when Rome allied with a city-state with which Samnium was at war, which drove the two powers into battle. The first war ended in a negotiated peace which gave Rome the prize of influence over Campania and the wealthy city of Capua. The second Samnite War began over tensions in Campania, when the Romans accused the Samnites of fomenting rebellion and demanded recompense. Samnium called for war instead.

It’s noteworthy that the Samnites were an Italic people like the Romans, and their language of Oscan was closely related to Latin. For that reason, an enterprising storyteller could use the Samnites as an interesting parallel to the Romans. Romans could relate to Samnite struggles and ambitions, but Roman authors had no obligation to glorify their achievements or politicize their downfall. As a result, a writer like Livy could use the Samnites as a mirror into the faults and vices of Roman leadership without embarrassing the state. And while Livy portrayed the Roman cause of this period as just, selfless and noble as a rule, he could portray the Samnites as greedy, belligerent, proud and foolish, even if they seemed to share traits and values similar to the Romans.

Which brings us to the character of Gaius Pontius, Livy’s name for the Samnites’ chief general, described as their “foremost soldier and commander”. Pontius’ father, Herennius, was an elder diplomat, and the “ablest statesman they possessed”. This is the setup for a moment that turns Pontius into a cautionary tale, because while Pontius could match his father in intelligence and raw military talent, Herennius had experience and wisdom which Pontius lacked – and that would cost the son dearly.

Gaius Pontius managed his troop movements very carefully; he deliberately started a rumor that his troops were going to besiege Luceria, a city allied with the Romans. Then he took his troops and camped them in a region called Caudium instead. If the Romans took the direct route to Luceria, they would have to take their troops into Caudium through a narrow pass known as the Caudine Forks, where Pontius could trap them. There was another way, along the Adriatic Coast, which was longer but safer, so Pontius tried to force their hand. He ordered ten of his soldiers to sneak into the countryside near the Roman position, and dress themselves as shepherds. When the Romans sent a foraging party to collect food for the army, as an ancient army needed to do to survive, the Samnites planted among the shepherds could share the false news with the Romans that Luceria was under siege, and its collapse was imminent. When the Romans heard the same story from multiple shepherds, they assumed it was true. It compelled them to relieve Luceria immediately, and take the shorter but more vulnerable path through the Caudine Forks.

[Image: map of the Battle of the Caudine Forks]

To simplify the geography, the Caudine Forks had one entry path and one exit path, both of which were rocky defiles with overhanging cliffs. The Roman army entered the Caudine Forks through the first path suspecting little in the way of danger, but when they got to the second path toward Luceria, they discovered that the Samnites had barricaded the narrow causeway with piles of felled trees and boulders which the Romans could not pass. From that moment on, the Romans were aware that the Samnites were carefully watching their movements from outposts atop the cliff edges. The general called a hasty retreat back through the entry path, but when the Romans went back the way they came, they were horrified to discover that Gaius Pontius had quickly fortified the other side of the pass while the Romans were marching, and he now held it with the bulk of his army.

The Romans knew immediately: they were trapped. Their situation was desperate. The Samnites held the high ground and had the Romans surrounded. There was nothing they could do to break out.

Before this moment, Rome had beaten the Samnites at nearly every engagement for thirty years. Now, they were utterly humiliated, before a spear was even thrown. Gaius Pontius didn’t even bother to attack; he just held his ground, and waited the Romans out.

As victories go, the Battle of the Caudine Forks – though it can scarcely be called a battle – was a masterstroke: a bloodless military victory. It shows how talented Gaius Pontius could be as a military commander, with a gift for cunning which allowed him to capture an entire Roman army. Yet it only heightens the irony of Pontius’ shortcomings, because while he was tactically brilliant, Pontius was strategically dull and impotent. At the height of his achievement, Gaius Pontius was stuck facing a question: what was he to do with the Roman army?


The Advice of Herennius

In a way, Pontius’ victory was too good. In a conventional setpiece battle, Pontius would have killed the Roman soldiers in combat or as they fled, without hesitation. But in this case, the Romans couldn’t take up arms against the Samnites, nor were they able to flee. They were wholly at Gaius Pontius’ mercy, and Pontius didn’t know how to handle an entire army taken prisoner.

Complicating matters further was the fact that the Romans didn’t separate their politics from their military. The consuls of Rome, the chief executives of the Republic, were also the commanders-in-chief, which was by no means a ceremonial position; they were the main generals personally leading the Roman army. And Gaius Pontius had trapped both of that year’s Roman consuls, Calvinus and Postumius, in the Caudine Forks. A tremendous prize indeed, but it made negotiations more difficult. He couldn’t hold the army hostage and make demands of the Roman Senate for their release; the men with the power to propose terms for peace were also trapped in the Caudine Forks, where they were separated from the treasury. Pontius’ treatment of the army was, by necessity, also his treatment of the consuls and chief negotiators of Rome.

Pontius hesitated on the course of action. So he and his retinue unanimously agreed to send for Herennius, his father, for advice. Herennius was an old man at this point; he had effectively retired from public life, and he was becoming physically frail. But, as Livy points out, he still had his mental sharpness, and the wisdom of his years. He was prepared for this moment in a way that Pontius was not. And the advice that he gives when his son’s courier arrives is the moment this story turns from a military history into a parable about the nature of leadership.

Livy writes:

“[Herennius] had already heard that the Roman armies were hemmed in between the two passes at the Caudine Forks, and when his son’s courier asked for his advice he gave it as his opinion that the whole force ought to be at once allowed to depart uninjured. This advice was rejected and the courier was sent back to consult him again. He now advised that they should everyone be put to death. ”

In other words, Herennius gave two wholly contradictory pieces of advice in succession.

The first piece of advice was “let the entire army go”. Don’t harm them, don’t force any conditions, don’t even disarm them; just let them go. Open up one of the barricades and let them pass. Pontius rejected this outright, and we don’t have to speculate why. On its face it seems crazy; it seems to render Pontius’ brilliant victory utterly pointless. What of the rights of conquest? Don’t the Samnites deserve to gain from their conquest, while their enemy is pinned? That can’t be right.

The second piece of advice was “slaughter them all”. To a man. Leave none alive. And Pontius also rejected this outright, for equally obvious reasons. The Roman army had made no aggression toward Pontius’ troops while in the Caudine Forks. They were trapped and helpless. Slaying them all, at Pontius’ command, would be an act of remarkable cruelty. Would it not dishonor Samnium? How could any noble man tolerate such an act? That can’t be right either.


But above all, Pontius could not possibly imagine how the same man, presumably a knowledgeable statesman, could recommend such diametric opposites. Why would the man who suggested absolute peace turn around, on a dime, and suggest absolute war? Was it a riddle? Or, borrowing from Livy, the “ambiguous utterances of an oracle”?

No, Pontius thought. It was just nonsense. He began to suspect that his father was going senile, and was swinging to extremes out of mental frailty. But, on the advice of his retinue, he invited his own father to his camp for counsel. Herennius complied with the request, and when he arrived, it was clear that, while the old man had lost his physical strength to age, he was still of sound mind, and remained adamant on both pieces of advice.

But sensing his son’s reasonable confusion, he also chose to explain himself.

“He believed that by taking the course he first proposed, which he considered the best, he was establishing a durable peace and friendship with a most powerful people in treating them with such exceptional kindness; by adopting the second he was postponing war for many generations, for it would take that time for Rome to recover her strength painfully and slowly after the loss of her armies.”

Herennius, in essence, is unconcerned with the minutiae of the act itself, and argues instead for the bigger picture. If the Samnites let the Romans go, it would put them on good terms for creating a truce, a partnership, perhaps even an alliance between the two states, which would be a boon to the Samnites in the long term. Barring that, Pontius could slay the whole army; it would make Rome an enemy, but a far weaker enemy, without an army, and it would postpone any threat to Samnite holdings for years while the Romans regathered their strength.

When his son inquired about a third option, Herennius dismissed him flatly. “There is no third course”. No third option.

Even when Herennius outlines his reasoning, which makes much more sense, it still galls Pontius. He refuses to accept this choice; it seems too harsh, and he appears plagued by a deep concern for his reputation.

When Livy first introduces Gaius Pontius, the general is giving a speech to the Samnite council. The speech shows that he is strongly motivated by protecting the honor of Samnium from insult. He wants justice, or at least what he thinks is owed to Samnium. I think that attitude plays into his thought process here. He turns this dilemma into a matter of ego. If he lets all the Roman soldiers go, he’ll look like a wimp. If he executes them all, he’ll look like a butcher. Neither of those options seems to give him an immediate gain in terms of land exchange or rights that are owed to Samnium.

So, Pontius hints at a middle path; he will let the Romans live, but he will extract strong concessions from them for their lives. Herennius rebukes this idea harshly.

“When his son and the other chiefs went on to ask him what would happen if a middle course were taken, and they were dismissed unhurt but under such conditions as by the rights of war are imposed on the vanquished, he replied: ‘That is just the policy which neither procures friends nor rids us of enemies. Once let men whom you have exasperated by ignominious treatment live and you will find out your mistake. The Romans are a nation who know not how to remain quiet under defeat. Whatever disgrace this present extremity burns into their souls will rankle there forever, and will allow them no rest till they have made you pay for it many times over.'”

Unlike Pontius, Herennius recognizes the character of the Romans. They are tenacious, stubborn, and intense. They do not give up in a military conflict; they tend to double down on the warpath after a catastrophic blow, when other countries would give in. They also hold grudges and have long memories, and will not rest until they achieve recompense for a humiliation.

This is a recurring theme in Roman history, one which comes up in the Pyrrhic Wars, the Punic Wars, the Gallic Wars, and so on. Herennius sees that strength of will in this moment, so he feels compelled to warn his son: the worst thing he can do is leave the soldiers alive and angry, itching for vengeance.

Perhaps it was a remarkable coincidence, or perhaps Roman authors tweaked the details, or perhaps Livy made the story up out of whole cloth. In any case, I have to draw attention to the beautiful metaphor of the Caudine Forks itself: there are only two narrow paths out, and Pontius has closed them both off. But in this case, he has trapped himself. He is stuck in the middle, and he has made himself helpless, just like the Romans. And as much as he wants to convince himself otherwise, there is no other way out. He faces a choice that he refuses to make – a crossroads, or a “fork in the road”, if you will – all because he wants to find a middle path that is not there. That is what makes this small, generally overlooked story from Roman history so interesting: it captures an important but underappreciated failure of decision-making.


Consequence

To wrap up the story from that moment, Gaius Pontius never comes around to his father’s point of view, so he dismisses Herennius. Instead, he negotiates with the Romans and offers them the following terms: he will let them live, but he will strip them of their arms and their armor, and leave them with but one garment for the journey home. Then, the Roman soldiers would “go under the yoke”, a type of ritual humiliation where the soldiers had to bow their heads and pass under a beam made of spears. Finally, he demanded that the Romans evacuate all their colonies in the disputed territory. Only on these conditions would he sign a treaty with the Romans and secure their release.

This was the supposed “middle path” that Pontius chose, but it was, as Herennius predicted, the worst of both worlds. The Romans seethed in anger at their treatment, yet their armies were left intact. Personally, I find the yoke the most baffling part. The disarmament and the territorial demands at least had practical value; the yoke only served to leave the Romans enraged.

[Image: Romans passing under the yoke]

Yoke’s on You!

 

There was a brief moment when the Romans contemplated refusing and trying to commit to a suicidal last stand, but they decided that “true affection for our country demands that we should preserve it, if need be, by our disgrace as much as by our death.” So, they endured the humiliation.

The Romans had their arms and their armor – prized possessions purchased personally by citizens for service to their country – rudely taken away from them. Then they passed by rank under the yoke, as the Samnites taunted and jeered at them. Their resentment at the experience is vividly described by Livy as they returned to Rome:

“The Roman mettle was cowed; they had lost their spirit with their arms; they saluted no man, nor did they return any man’s salutation; not a single man had the power to open his mouth for fear of what was coming; their necks were bowed as if they were still beneath the yoke. The Samnites had won not only a glorious victory but a lasting one; they had not only captured Rome… but, what was a still more warlike exploit, they had captured the Roman courage and hardihood.”

What follows is a little unclear; Livy seems to contradict himself as to what happened next. But according to the most detailed account, Postumius, the Roman consul, knew that he had to get the consent of the Senate in order to ratify the terms of the treaty. So when he arrived with the stripped-down army and explained the terms he had been forced to accept to secure the soldiers’ release, the Senate was so disgusted, so outraged, that rather than ratify the treaty, it sent Postumius and those same soldiers back to the Samnites, still stripped and unarmed, as an offering instead. After all, if the terms were the price of the soldiers’ release, then the Samnites could just keep the soldiers for all the Senate seemed to care. Pontius refused to accept them and dismissed them instead, considering the deal broken. So rather than a harsh peace, Pontius secured only more war, now with an enemy fueled by vengeance.

In conclusion:

“The Samnites clearly saw that instead of the peace which they had so arrogantly dictated, a most bitter war had commenced. They not only had a foreboding of all that was coming but they almost saw it with their eyes; now when it was too late they began to view with approval the two alternatives which the elder Pontius had suggested. They saw that they had fallen between the two, and by adopting a middle course had exchanged the secure possession of victory for an insecure and doubtful peace. They realized that they had lost the chance of doing either a kindness or an injury, and would have to fight with those whom they might have got rid of forever as enemies or secured forever as friends. And though no battle had yet given either side the advantage, men’s feelings had so changed that Postumius enjoyed a greater reputation amongst the Romans for his surrender than Pontius possessed amongst the Samnites for his bloodless victory. The Romans regarded the possibility of war as involving the certainty of victory, whilst the Samnites looked upon the renewal of hostilities by the Romans as equivalent to their own defeat.”

In the very next campaign, the Romans turned their fortunes around with a successful and ferocious offensive in Samnium, fighting like madmen seeking vengeance. When they captured Samnite soldiers, they forced them to endure the same indignity of going under the yoke as payback – including one Gaius Pontius.


It would take 33 years before Gaius Pontius was finally killed by the Romans during the Third and final Samnite War, but that only confirmed the prescience of Herennius: the Romans, as he predicted, held grudges and had long memories, and in time humiliating Rome led them to repay the disgrace many times over. Pontius would end his life and his once-promising career as a prisoner led through Rome during the triumph of the consul Fabius Maximus Gurges. And after that final humiliation, paraded around as a prize and a spoil of war, Pontius was executed.


The Dilemma of Leadership

Pontius’ mistake at the Caudine Forks is a great example of the perils of “balance” – in other words, of taking a middle path when a comparatively extreme position would be far more beneficial – particularly when there is more than one extreme option, each of which has strong merits, but which are mutually exclusive.

Leaders often struggle with the challenge of reconciling two contradictory obligations: the need to be judicious, and the need to be authoritative. When decisionmakers have to weigh their options with limited information, they generally apply heuristics to assess the best choice, and two of them are relevant here.

The first, let’s call the “heuristic of moderation”. Put bluntly, it tells you to be evenhanded; to see the value of all sides, and if you have two options, both of which have merits but with noticeable downsides, to try to find a middle ground – the golden mean, as it were. This is often the prudent action to take, for various reasons: most of the time (but not always!) it minimizes the costs associated with extreme positions; the benefits are more diversified; it is a more adaptable position, providing more options in the long term than an extreme position; and it compels decisionmakers to review more varied pieces of information. This heuristic is quite literally hardwired into human cognition: psychologists use the term “Goldilocks principle” to refer to the human tendency to prefer and seek out a “just right” choice.

But a middling position is not always the optimal position. In some cases, a leader would benefit much more from taking a more extreme stance. This is the case when a middling position diminishes or even eliminates the benefits that would have been gained from a more extreme position, while retaining the costs associated with both. Part of the reason why this happens comes from a second, competing heuristic: let’s call this the “heuristic of decisiveness”.

Put bluntly, this heuristic tells you to take a firm stance and stick to it; to clarify your intent, commit to an agenda and assert it confidently. This is often the authoritative position, the position that establishes command and inspires confidence, and while you can apply a cost-benefit analysis to it (maximize the benefits of a strident position, etc.), its most important benefit somewhat eludes that formula: namely, its social impact. This heuristic builds assurance and conviction in other people, and presents them with greater clarity in their purpose and goal. Furthermore, once you have started out on an agenda dictated by the heuristic of decisiveness, trying to reverse course is so costly that it disincentivizes people from hesitating, shifting or disagreeing. This heuristic is also built into our cognition, for good or for ill; for example, the “sunk cost fallacy” is largely motivated by the impulse to commit to a prior decision out of fear of backing out and spending a cost without any benefit.

This heuristic is often particularly valuable when you’re experiencing internal conflict and leadership is not firmly established – certainly, that’s when building authority and confidence is most obviously valuable and when “balance” seems wishy-washy. But the Caudine Forks illustrates the value of decisiveness in negotiations, too, for establishing clear relationships with other countries. The reason Herennius urged either total mercy or total slaughter was to communicate a clear stance on the Samnites’ relationship with Rome. Setting themselves clearly in friendship with Rome or clearly in hostility would be better than leaving a grey area that soured either prospect.

Now, here’s the dilemma of leadership: how do you reconcile these two heuristics? Their inherent conflict is readily apparent, and what’s more, it’s hard to come to any conclusions without the whole problem collapsing in on itself.

I mean, in order to even assess the problem, you kinda have to choose one of these heuristics to apply to it, right? If you start out articulating the benefits and downsides of being evenhanded and being stridently consistent, you are actively utilizing the heuristic of moderation – to assess if and when you should apply the heuristic of moderation. Conversely, if you’re leaning one way or the other and you elevate that into a firm choice, you are actively utilizing the heuristic of decisiveness.

Doesn’t that taint the whole evaluation? Aren’t you “begging the question” and biasing the results? Trying to answer the question practically demands that you presume the answer.

I will not attempt to provide a definitive resolution to that dilemma here; suffice it to say, I don’t think there is a single answer. Unfortunately, I have to slot it in the unsatisfying “it depends” category. But what I can say is that it’s important to be aware of the value of both – and the Caudine Forks reminds us to avoid a simplistic, naïve answer.

Pontius spoiled his spectacular victory at the Caudine Forks because he believed in the inherent superiority of the “middle path”. Thus, he couldn’t escape the “heuristic of moderation”, and see that he needed to take a strident and decisive stance. Either extreme would have benefitted him more than balance for its own sake – especially when that balance is motivated by a reluctance to commit to a strategy.

Pontius mistook his indecisiveness for judiciousness. He didn’t want to commit to an action that would force him into a long-term strategy, so he instead took a tepid and ill-advised path that worked toward no long-term strategy at all. It certainly didn’t help that he allowed himself to be duped, leaving his enemy virtually as strong as before and motivated by scorn.

I like to think that this was Livy’s attempt to provide a historical fable about decisionmaking, and while he chose the Samnites for that role, he was directing his commentary to Romans. (That might explain why Gaius Pontius, a Samnite, has a Latin name.) The Samnites allowed him to explore how that failure of judgment would affect the Romans without unduly humiliating them. And considering the time that Livy lived through, it makes sense that he was particularly concerned about leadership.

Livy recognized that Rome would now be led by individual Caesars, and he knew that their decisionmaking would make or break the new empire. He wanted to ensure that they would not limit themselves to uninspired, middle-of-the-road decisions built on fallacious thinking.

The civil war period that Livy endured was escalated in part because vacillating politicians cleared the way for the authority of military leadership. In the long run, the civil war came to a close not by compromise for its own sake, but by the decisive leadership of individuals. I see Livy’s discussion of the Caudine Forks in part as an argument that courage of conviction might have benefitted, maybe even saved, the disintegrating republic. Certainly, he was arguing that courage of conviction would be necessary for the stability and prosperity of the new autocratic regime.

He knew it was unwise for Romans to mutter platitudes about evenhandedness while standing in the middle. They needed to be audacious, to the point of extreme generosity or extreme brutality, if and when it benefited the Roman cause.


Idea: Instead of *crossing* the Rubicon, we could wade in halfway, wait, and see if it satisfies both sides.

While we should remember the historical context in which Livy lived and the agenda he had, I think that’s a universal lesson.

So keep in mind: if you are presented with an important, pivotal choice and your first reaction is to equivocate about “both sides”, take a moment of pause. If your first concern is about looking moderate, rather than making the best choice, be warned. If you notice yourself falling into the fallacy that “balance” is a morally and strategically superior position, remember the tragedy of Gaius Pontius, the man who trapped an entire army, and then trapped himself, in the Caudine Forks.

Being a good decisionmaker requires the courage to take a hard position, and cross a point of no return. Otherwise, the decision you’re afraid of might be made for you, and it will be too late to salvage it.

Obviously, you need to be able to recognize a correct balance when it presents itself. But you also need to recognize when there is no third path.

 

Yoke’s on me,

Connor Raikes, a.k.a. Raikespeare

April Fools: DIY Conspiracy Theory!

Is something missing in your life? Do you feel anxious? Lonely? Afraid? Maybe you’re growing tired with age, and struggle with low self-esteem. Maybe you’ve suffered trauma and find yourself looking around every corner for a threat. Or maybe you’re just bored, looking for something to make life more interesting.

Millions of Americans suffer from the same problem every day – in that they have some problem, any problem, whatsoever. And for millions of them, confronting those struggles in a mature, grounded, sensible way is not enough, or simply not an option. Which is why they supplement responsible life choices with… Conspiracy™.

Hi – I’m Raikespeare, and I’m here to show you how, with a few rhetorical tricks and a big ol’ sack of bad faith, you too can be at the forefront of a brand new Conspiracy. Whether your goal is to advance your public image, radicalize a political movement, create a pedophilic death cult, undermine liberal democracy, or simply find a community of paranoid psychotics to confide in, I can show you how to take advantage of this growing media industry!

Some of you who know me might think that I’d be opposed to using deception and manipulating the sincerely held fears of people for cynical purposes – and you’d be completely wrong! Now that I have been thoroughly inspired by the main tenets of Ayn Rand, following their forceful beating into my newly fractured skull, I recognize the moral good in unmitigated selfishness, and in capitalizing on people who don’t have my profound throbbing reasoning ability.

And contrary to popular belief, conspiratorial thinking is much more prevalent than most people realize. It’s not just confined to ‘lizard people in human skin’ schizos on the margins; some of the most high profile movements throughout history are driven at their core by conspiratorial thinking. As I will show you, all you have to do is broaden your thinking and clarify what makes a conspiracy a conspiracy, and pretty soon you’ll see conspiratorial thinking everywhere, almost like it’s following you, watching, and is BEHIND YOU RIGHT NOW.

(Sorry.)

The Nazi Party was founded on a conspiracy theory, as was the Tea Party Movement, the anti-GMO movement, and Fox News. Marxism is basically a socioeconomic conspiracy theory. Even Objectivism, which I’m contractually obliged to recognize as the infallible reasoning of a radiant demi-goddess, regularly utilizes conspiratorial thinking in its discourse.

In fact, my only objection is that it doesn’t go far enough. Another main tenet of Objectivism – driven through my brain like a blasted tamping iron – is the near fetishistic worship of uncontrolled capitalism. And what embodies the morality of free market ingenuity better than the ethos of disruption? As the past twenty-odd years have shown us, disruption is a good in and of itself, and only positive benefit has ever come from blindly disrupting the systems that people rely on for survival and belonging, all in order to reap personal gain… and also, like, innovation and progress too, or something, I guess.

It’s about time for someone to disrupt the conspiracy market! I say, let’s go all the way. Instead of the old model, where paranoia consumers choose one of the old, tired, established conspiracies, I’m offering a comprehensive conspiracy platform for you to make a personal, customized conspiracy, so you can be the prophet of your own truth(erism). All you have to do is follow my simple, straightforward, six-step program, DIY Conspiracy Theory, to build your conspiracy from the ground up. With a little time, and some luck, you too will have folks drinking your Kool-Aid!


Control all you can’t see – because they’re HIDING IT FROM YOU.

Starting with…


Step 1: Craft the Narrative

Now. I know some of you conspiracy fanboys are probably anxious to jump right in and start creating mad yarn-board theories, like how the airports are built according to the ancient symbols of the snake people aliens. This, to use the technical terminology, is a classic “noob mistake”.

The oversight is assuming that the fundamental draw for conspiracy theorists is the engaging creativity of their psychotic speculative “what-ifs”. But successful conspiracy artists – emphasis on the ‘artist’ (and maybe the ‘con’, too) – understand that the core audience, their most devoted fans, aren’t just engaging in a tacky form of escapism. Their passion, their obsession, their sense of mission, is based on a successful narrative. You can conjure the most fascinating theories, but if they don’t feed into a strong core narrative, they will fall apart at the seams.

Somewhere down the line, you’re going to run into skeptics who start poking holes in your claims. And maybe you think that you’re clever enough to counterclaim and outsmart the nonbelievers, but you can’t say the same for all your followers. Don’t expect them to out-think the infidels; they have to out-feel the infidels. You need to plant deep-seated emotions in their hearts – emotions they will fight tooth and nail to protect, regardless of the information their mind-brain is processing.

You must take to heart the “fundamental theorem of intellectual dishonesty”: The Narrative is more important than the Facts; always, always, always.

This theorem is especially useful because, technically speaking, narratives aren’t accurate or inaccurate in the way that facts are; they simply provide a framing device to process claims, which broadly allows them to evade falsification. Their internal consistency is what gives them power and makes them compelling, not their consistency with external knowledge or discoverable truths. If your narrative is powerful and engaging, then it won’t matter if your following gets challenged on their horseshit; some subconscious voice inside of them will say, “Those discrete details technically might be wrong, but the broader narrative still feels right, doesn’t it? And if the skeptics can’t stop it from feeling right, then that’s their fault, not mine.”

The narrative is where your conspiracy theory will make or break its success, and for those of you who aren’t experienced in crafting a narrative or telling a story, here are some guidelines to help you along the way.

  • Build on an existing sense of grievance: Have you ever felt like you didn’t get something you deserved? Grab that feeling by the horns. Doesn’t matter if that grievance is real or fake. Reflect on it. Isolate it, deconstruct it, dissect it and figure out how that grievance works. How did that feel? What made you think you deserved whatever it was? Now, who or what was responsible for keeping it from you? Take that person, or idea, and blow it up as big as possible.
    You hate paying taxes? Take that grievance and make it bigger! Go from opposing your taxes, to the IRS, and so on until you’re fighting Big Government itself!
    You hate that you’re struggling to get by while some people get rich? Make it bigger! Go from opposing some rich people, to the corporations, to the capital-owning Bourgeoisie, and so on until you’re fighting Capitalism itself!
    Do you feel inadequate as a man because your meatheaded aggression isn’t given the fawning respect it deserves? Make it bigger! Go from opposing some tedious bloggers on Tumblr, to all “SJWs” and beta cucks, and so on until you’re defending Men’s Rights against the tyranny of the Political Correctness agenda!
    Is the parochial culture you were raised in slowly confronting broader trends of global economic integration beyond your understanding, let alone control? Well… can’t really make it bigger than the globe, exactly, but you can craft the Globalist Conspiracy out of it!
    What’s important is, by using that common underlying sense of grievance, you have a natural appeal and an existing audience base to take advantage of. And by blowing it out of proportion into a leviathan beyond a single person’s comprehension, you provide a creative mental barrier so your audience doesn’t recognize and reflect on the insecurity that the conspiracy theory is built on!
  • You Need a Villain: Whatever your conspiracy theory is, you gotta have a strong villain. But don’t make the mistake of assuming your villain has to be a particular guy; rather, a villain is a particular character. And a very popular trick is to impose a singular character onto a much broader group. In order to make it work, you must avoid any nuance. Treat that group as one homogeneous, faceless organism with a single malevolent character and a single deliberate agenda. Don’t acknowledge any disparate parts or competing agendas or the influence of social systems – that would make your conspiracy look completely incoherent!
    Your villain should be some ‘other’, a force just beyond the fringes of the life experience of you and your audience. Ideally, your villain should have some vivid tangible component to it to trigger the sense of anxiety in your audience; men in black suits, bearded devout Muslims, Hillary Clinton, and so on. If you don’t have any vivid tangible components to draw on, feel free to make some up. Do Freemasons use triangles in any of their symbols? Presto! Now every triangle on the planet is clear evidence of a Freemason conspiracy!
    However, don’t go overboard on details; that’ll overwhelm your audience, and give your opponents more opportunities to poke holes in your conspiracy theory. Keep it elegant; keep it intriguing; and for Lord Xenu’s sake, keep it mysterious! Your audience should have plenty of space left for their imagination and their own speculation. That’ll be extremely valuable to you in the long term. Think of it as a way of respecting the people that you’re exploiting. Because everybody loves to hate a villain, and like the generous person you are, you’re giving them a chance to do that in a real made-up way!
  • Always Assert that Their Agenda is Hidden (But Also Nefarious): In general, you shouldn’t claim to fully know the agenda of your villain; that makes you seem a bit suspicious. Instead, claim that the agenda is hidden from you and everyone, and that you’re trying to figure it out with your audience. That you apparently started from a place of ignorance, and only came upon this knowledge through devoted study of the conspiracy, makes you seem more relatable and your gifted truth more attainable. It motivates the audience to put in that extra work in the mental gymnastics to see the world the way you do, and it provides you an easy out if there’s a question you can’t (or won’t) answer: “Hey, I’m still putting the pieces together on their plans.”
    But, you must always imply knowing with certainty that their agenda is wholly bad. You can’t be ambiguous on this front, lest your audience think that the conspirators in hiding could be misunderstood – the agenda is always bad, no matter how much of it you claim to not know. Your villains are hiding their agenda because they’re evil, simple as that. You fully know that, and no one should ever question it. And if they try to speak from their own perspective, don’t listen to them – they’re the ones who are lying to your audience, right? (Wink!)
  • Claim All the Knowledgeable People Are Suspicious (Except You, of Course): You will ultimately need to control the information that your audience accepts, so come up with an excuse as to why key sources of credibility are somehow off limits. Journalists, universities, scientists, accredited experts, people with direct experience, and so on – if they have facts at their disposal, you have to bring them in on the conspiracy. Maybe they’re paid off by the billionaires who run the conspiracy. Maybe they’re controlled by higher-ups who dominate their profession. Or maybe – maybe they’re just sheeple, blind to what’s in front of their eyes, and their elitism and arrogance keeps them from recognizing what only you as an outsider can see.
    Point is, you must promote the idea that your followers should have zero tolerance for people with any kind of expertise that they earned through careful, sober study; if you believe the narrative is more important than facts, act like it. But don’t let that stop those experts from unwittingly bringing credibility to you; if they want to argue with you, and criticize you as if you were an expert or knowledgeable on the same level as them, by all means – let them! Speaking of which:
  • Avoid the Pitfalls of Critical Thinking: When confronting a conspiracy theory, there is a temptation to apply so-called “critical thinking” to it, and acknowledge some reasonable cause for skepticism. As a conspiracy theorist, you might even be tempted to do it yourself.
    Don’t do it! Not for a moment. Discard any sanction of critical thinking, and push that shit away! If you legitimize critical thinking, then the narrative is beyond your control, and accountable to those rotten inconvenient facts.
    For example, someone might bring up the following points, in one form or another:

    • Occam’s Razor: When presented with competing explanations or hypotheses for readily available information (collected in good faith), the observer should give priority to the explanation that is the simplest and most straightforward, which makes the fewest assumptions.
    • A coherent agenda requires organization: The clarity and coherence of an agenda is roughly proportional to the level of organization required to pursue that agenda.
    • A larger and more complex agenda requires more resources to maintain organization: The scale and complexity of the parts involved is roughly proportional to the amount of resources – physical, temporal, and labor-wise – that are required to keep them organized.
    • The more resources an agenda requires, the harder it is to keep it hidden: The movement of valuable resources usually generates more visibility and interest. Keeping an agenda covert, as well as its actors and motivations, will further compound the resources that are spent on executing it already.
    • Beyond a certain point of scale, organization and complexity, keeping an agenda covert is inefficient, unsustainable, and strategically irrational.

If you see any thought processes like these in someone, immediately beat it down, attack the person’s character and change the subject. You can’t have those deviant thoughts swirling around in your audience’s head!

And above all: It’s All About Framing, simple as that. If your audience thinks about your subject matter according to your framing, a conspiracy should be a foregone conclusion for them.

Remember, a narrative is a framing device, so you should frame every detail carefully to put your audience in the fearful, angry, constantly engaged emotional state you need them in, right down to the individual words you use – even the articles.

Let’s take a classic: the Jewish conspiracy theory. We have an entire conspiracy theory built largely on a basic framing technique. You would never hear a proper theorist use the term “Jewish people”; it’s indefinite, and it identifies too much humanity in the subject matter. That term makes Jewish people seem like a group of individuals with personalities and aspirations that are distinct from the religion that they’re a part of – which is true, but that’s beside the point. If instead you spoke about “the Jews”, now you’re getting more conspiratorial. Using the definite article removes that individuation; it gives “the Jews” a single identity, one denoted solely by their religion without any acknowledged humanity, so projecting a single agenda onto them seems less absurd.

Little details like that can make all the difference in a conspiracy theory!

Learning how to make a conspiratorial narrative is the first and most important step. It can be hard starting out, but once you start practicing, it quickly becomes intuitive.

Just today, I noticed it was the first day of the month, and I needed to pay rent. I thought to myself, “Hey, rent in the Seattle area is very high, isn’t it…” And my mind wandered to the question of why that is.

I was about to attribute it to “low housing supply”, “a robust housing demand from high-skilled, high-income workers who are drawn to Seattle’s tech industry”, “flawed housing and zoning policies,” and “trends of urbanization spurred on by the decades-long transition to a service-based economy”. But then I stopped myself, and realized: That’s a grievance! Why provide a reasonable explanation when I could build a self-indulgent conspiracy off of it??

Who do I hand my rent to? A landlord. So clearly, the landlords are up to something. If rent prices are going up everywhere, it must be their fault! They are conspiring to keep rent prices so high in order to leech off of us!

But wait! That’s just the beginning! In order to pull off such a wide-ranging scheme to raise the price, they have to be organized in secret by something more powerful. But what could have such power? What large organization in Seattle activates my aimless anxieties about change?

Of course! Amazon. Jeff Bezos is secretly controlling the landlords in order to keep rent prices high, and it’s part of his evil plan to take Seattle away from locals! I wouldn’t be surprised if Jeff Bezos and his cabal were personally responsible for everything bad that happened to Seattle in the past twenty years or so!

Bezos shut down the Seattle PI!

Bezos told Pete Carroll to go for the pass!

Bezos put in the casings that broke Big Bertha!

Bezos made Howard Schultz sell the Supersonics!

Kurt Cobain was murdered, by Jeff Bezos!

Tim Eyman is a paid actor hired by Jeff Bezos, or possibly Jeff Bezos in disguise!!!

Jeff Bezos! Bezos!! BEZOOOOOOOOOS!!!

(See how easy it is? And I’m just spitballing!)


Step 2: Ask Questions that No One Can Reasonably Answer

The best part about questions is that by asking them, you can seem like you’re just being curious; you can seem like you’re on an equal level with everyone else; you can seem like you have no power because you don’t know and are seeking knowledge; but really you can assert control of the discussion and its framing. You can set the terms of the conversation you are having, and use it to invite people to reach the conclusion you want them to.

However, the power of questions is curtailed if somebody provides a logical, satisfactory answer. So, best to avoid that problem by asking questions you know cannot be answered in a logical, satisfactory way – i.e., questions no one can reasonably answer!

Look for questions that are difficult to answer for one of three reasons:

  • The knowledge required to answer is highly specialized and inaccessible to most people: How could the Twin Towers fall if jet fuel can’t burn hot enough to melt steel beams? Sure, maybe you know that the compressive strength of steel is greatly reduced at high temperatures, but can you explain what that means in terms that regular folks can understand without an engineering degree? I thought not!
  •  The question is predicated on bad assumptions: Fun fact – if the answer to your question requires induction based on an assumption that is false, then the answer is arbitrary, because there is no correct answer. This is what we in the business I made up call the “woodchuck principle”. How much wood could a woodchuck chuck if a woodchuck could chuck wood? Trick Question! The question has no correct answer, whether people realize it or not; therefore every answer is equally valid. We are asked to accept a false hypothesis (suppose woodchucks can chuck wood), and then induce a conclusion that cannot be deduced from the false hypothesis (how much wood can it chuck?). Likewise, why didn’t the mainstream media cover the economic problems that Trump voters were facing? (As some of you might remember, they did, regularly, but don’t let that fact get in the way of how that claim felt.) What’s important is, you can give virtually any answer you want, and anyone who tries to find the *right* answer to a truth-impaired question is wasting their time!
  • The Question is Too Broad and Esoteric to Answer Simply: When in doubt, just make your questions vague and abstract. What are they up to? What are they hiding? Questions like these are easy to ask and impossible to answer – so be sure to ask them and put the onus on your skeptics to untangle that mess. Then, use the fact that skeptics can’t answer those vague questions to claim that the skeptics must be wrong about everything.

If you’re the one asking the questions, you’re well-positioned to subtly insert your narrative into people’s brains and fill them with paranoia. Use those questions to point people toward the conspiracy and the villain that you have set up.


Step 3: Make Your Audience Feel Special for Following You

So now you’ve got their attention; now you’ve got them thinking and asking your questions. Next, you need to make sure the audience looks only to you and your disciples for answers. And to do that, you should make them feel special for listening to you and you alone, and crummy for listening to anyone else.

Everyone’s looking for a sense of community. Everyone wants a sense of purpose. Make them depend on you for a taste of self-worth – but, don’t let them fully self-actualize or anything. Tempt them, but don’t satisfy them. Tell them they’re unique, because “they see through the bullshit” or some bullshit like that. Tell them that the scales have fallen from their eyes, and they’re able to see what the blind masses can’t. You have the special revealed truth and it’s in their grasp if they just devote themselves a little more to you.

If you’re targeting a specific demographic, or culture, or ethnicity, pander to them. Wear your Stetson and Cabela’s camo gear to your white working class gatherings. Wear a dashiki at African gatherings if you think you can get away with it. Embed yourself in the community and use it to understand the hopes and fears that you can manipulate. Your audience should believe that they’re part of your community, and you’re part of their community. And if they ask disruptive questions, make them believe that they aren’t just challenging you; they’re harming your community – their community – and threatening their standing in the herd. In other words, by thinking independently they’re imperiling the source of their belonging and purpose in life.

And while we’re at it…


Step 4: Antagonize, Provoke, Make Shit Up

At this point, you have the narrative, you’ve recruited the followers, you’ve built the community. Now you need to establish information control. It’s time to separate the wheat from the critical thinkers.

Good news for all you conspiracy fanboys: this is the stage where you get to make shit up!

Go nuts. Compile all your craziest theories and stretch the credulity of the audience to the limit, and see how they respond. Illuminati, lizard people, Pizzagate – use your wildest imagination and pull evidence of your conspiracy from the deepest part of your ass. If your community is still loyal to you, clearly you’ve done something right. More importantly, you draw a line in the sand with everyone else: You must be this deluded to enter. And if your audience wants any of the benefits that come with your conspiracy theory – the validated sense of grievance, the two-dimensional villain, the community, the sense of purpose, and above all the compelling, addictive narrative – they have to suspend their disbelief (in all the ways you want them to suspend their disbelief).

But ultimately, it isn’t just the followers that you’re trying to provoke with your bullshit. You also want to provoke the unindoctrinated, the people who would apply that hazardous critical thinking to your conspiracy theory. You’re baiting them into reacting, and confronting you, and attacking you. Why? Great question…


Step 5: Use Other People’s Hostility as Proof of your Conspiracy

In all honesty, outright bullshitting is fun on its own – but the real value comes when you provoke people on the outside and use their hostility to bolster your movement.

How? With narrative – how else?!

Don’t treat the critics as people acting on their own good faith perspective to dispute your assertions; treat them as agents of the conspiracy, knowing or otherwise, trying to tear you down on behalf of your all-powerful villain! You can use their actions to make the faithful more fearful, more confused and therefore more devoted and dependent on you!

Critics of conspiracy theories are so concerned with what’s true and false, which is what makes them so predictable. They keep trying to outsmart you with knowledge, and logic, and facts – but you’re above all that. By this point, you’ve learned the fundamental theorem of intellectual dishonesty: the narrative is always more important than the facts! So let them poke holes, safe in the knowledge that your critics don’t have the emotional strategy they need to truly undermine your movement. Then, you can take their rage, flip it, and sell it back to your audience as proof that your movement is under attack by the thought police/establishment/elites who are trying to silence you and censor the truth(erism)!

Don’t worry if, on reflection, it doesn’t make any sense. Just make sure that whatever you tell them, it gets them too angry and paranoid to reflect!

Step 6: Exploit the Devotion of your Followers for Personal Gain

Once you’ve got a strong base of followers to leverage for new recruits, now it’s time… for profit!

“But Raikespeare,” I hear some of you say. “I’m not in conspiracy mongering for the money; I’m doing it for personal interest.” And I get that; for some people, the craft of deception is the real joy for them. It’s a passion; a fulfilling hobby; an intramural sport, not a professional gig.

But I’m gonna level with you – taking a conspiracy to the next level always requires leeching off of your followers for personal gain.

It isn’t just because you can use the money to pay for expenses and expand your reach – though clearly that helps. No; leeching off of your followers is a core part of the audience engagement. Why? Because you need the leverage over your followers.

Establish a “non-profit” to promote your ideology and solicit donations. Write a book with your face on it, and sell it to them. Market some survival gear and vitamin supplements, and make it part of your brand. Establish a woodland retreat and charge your followers for tickets. Become a third party and ask for donations. Hell, if you like, start preying upon your flock and solicit them for licentious favors, if that’s your game. (And by flock, I mean your followers, not an actual flock… you know what I mean…)

Conspiracies operate off of many fallacies, but one of the most powerful is the “escalation of commitment”. Once your followers have bought into the conspiracy – literally – it makes them terrified of backing out. They’re more likely to invest themselves further, commit themselves deeper and radicalize – all to see if they can get something out of following you.

Why?

Because the alternative is accepting that they’ve made a terrible mistake, that the squares and normies were right and your followers have been duped, that they’ve wasted their time and money and energy supporting a belief system that causes harm to them, and their loved ones, and their communities, and the world at large – and that’s time and money and energy they’re never gonna get back, no matter how hard they try!

And nobody wants to believe that, right?

Many people – perhaps most people – would go to extreme lengths to rationalize themselves out of that belief, even going so far as to buy into your conspiracy more deeply out of desperation, at the expense of their livelihood, their friendships, and their family.

NO YOU’RE THE UNREASONABLE ONE!!!


And that’s what DIY Conspiracy Theory comes down to at the end of the day. You’re providing a convenient alternative to an unappealing reality, for people who don’t want to believe it.

An alternative reality where the major problems of the world are always compelling and stimulating, rather than dull, tedious and demoralizing. Where you hold claim to a revealed truth that overrules all facts, which you are able to discover without having to think very hard. Where all suffering is caused by unsympathetic, unrepentant villains, rather than complex systems helmed by ordinary, fallible, often sympathetic people with competing interests. Where you deserve everything that you feel entitled to, and the way to obtain it is by obsessing over the minutia of the lore of an epic villainous plot as laid out on a forum board. Where there is a clear ‘us’ and ‘them’, and there is nothing to be feared from people who offer a worldview that is awfully convenient – dare I say, too good to be true.

This is what drives conspiratorial thinking – and with the DIY Conspiracy Theory platform, we’re empowering you with a personalized service, disrupting the conspiracy theory industry once and for all!

Oh.

Oh wait.

Nevermind. We have to cancel the launch of DIY Conspiracy Theory because, apparently, somebody else already invented Facebook.

Who knew?

-Connor Raikes, a.k.a. Raikespeare

P.S. Some of you may take issue with that last part, on the grounds that Facebook isn’t just a large medium for creating and spreading outlandish conspiracy theories; instead, it’s a complex social platform with the same fallibilities as the users themselves. But, isn’t that what they want you to think? Convenient, isn’t it???

Hey – I’m just asking questions here.

P.P.S. Happy Veneralia, everyone – it falls on April 1st every year!

Donald Trump Doesn’t Do Empathy

According to Donald Trump’s statement, he will visit Parkland, FL in the aftermath of yesterday’s shooting at Douglas High School, where nineteen-year-old Nikolas Cruz shot and killed 17 people with an AR-15.

Donald Trump may think visiting Parkland, FL in the aftermath of their horrible tragedy is a good political move, but he’s sorely mistaken.

Yeah, it makes sense in theory. It’s often valuable for a politician to present themselves as a figure of healing in the aftermath of a tragedy. Parkland is a fairly short drive from Palm Beach (which is to say, Mar-A-Lago), and let’s not forget, Florida is a swing state, so maybe he thinks he can score some political points with a pivotal strategic state.

But I think his administration would regret it.

In part, Trump has made himself such a divisive personality by design that people wouldn’t accept him as a uniting figure standing above the fray. But that’s a secondary issue; after all, he could just blame the Democrats for any hostility that comes his way as he always does.

There’s also the problem that preliminary reports indicate that Nikolas Cruz, the shooter, was a member of a White Nationalist militia – and Donald Trump does not have a lot of credibility confronting White Nationalists.

But no; the larger problem is much more basic: Donald Trump doesn’t do empathy.

It’s not a problem with emotionality; far from it. Trump is, to put it charitably, a very expressive person. He dramatizes and exaggerates his every impulse. He can manipulate the emotions of an audience; he can tap into their fears, inspire their prejudice, and control their outrage. But he has had multiple opportunities to be responsive to the suffering and hardship of people – his supporters as well as his critics – and he never does so successfully.

It’s not that he’s bad at showing empathy, or bad at being graceful in his empathy. He’s unable to demonstrate that he is capable of empathy whatsoever.

And as a President, that’s remarkable. It’s not hard to find moments of Barack Obama being empathetic, or George W. Bush, or Bill Clinton. Even politicians who might be sociopaths like Ted Cruz or Mike Pence seem to understand people enough to effectively feign empathy.

But not Donald Trump. Whenever he’s in a position to be supportive and sympathetic to someone who has suffered, he either doesn’t try, avoids the opportunity, or fails spectacularly.

Frequently, he takes on a subdued, patrician, self-important posture where he lowers his voice and reads from a prompter. In those cases, he substitutes being generically “presidential” for empathy, as in his Address to the Joint Session of Congress. But clearly it’s not the same; the dignified words are clearly not his own, and he isn’t taking on the perspective of the victim. He’s just mimicking a person of power – an illusion which unravels the moment he congratulates himself – and he always congratulates himself.

Alternatively, he sometimes tries to find an “in”, a way to treat the tragedy as just another manifestation of the fear and anger he regularly exploits. So he tries to manipulate and redirect the subject toward, say, his anti-Muslim agenda, or his anti-immigration agenda, or a personal feud. Take, for example, the time he responded to the 2017 London Bridge terrorist attack by launching a personal attack against London mayor Sadiq Khan – who happens to be Muslim, and critical of Trump. And maybe that works for some of Trump’s base, but most people see through it. They recognize that he isn’t really addressing their pain, and that he’s taking an opportunity for healing and turning it into something hateful and self-serving.

In other cases, he may attempt to avoid the tragic details by insisting that people are wrong to feel hurt, and spin it into something good for him. A notorious example of this was his trip to Puerto Rico in the wake of Hurricane Maria. It was a stark contrast with Mike Pence during Hurricane Harvey, who, for all his deep personal faults, understood that he should roll up his sleeves and hug the families affected by the hurricane. But not Donald Trump in Puerto Rico. He took on an upbeat, chipper attitude that was sorely out of place, casually quipping about how much the hurricane damage would cost the United States, playfully throwing paper towels into the crowds of displaced families, and musing that he’d give himself an “A+” for the response. (Ignoring the fact that hundreds of thousands if not millions of Puerto Ricans still don’t have power TODAY.) Hopefully, I don’t have to explain why that is not empathetic.

And finally, sometimes Donald Trump responds in all three ways. Which is what he did after the Charlottesville rally.

At that fascist/KKK/far-right rally, most notoriously, James Alex Fields Jr., a white supremacist, maliciously drove his car into a crowd of peaceful counterprotesters, injuring 19 people and killing one young woman, Heather Heyer.

The President’s response to that attack was not the helpful, healing sentiment we might need or expect of the President, but a confused mess of amoral posturing that included: equating the neo-Nazi violence with non-violent counterprotesters; muted, crusty statement-reading; and a full-blown public meltdown, in which he defended Confederate statues, lashed out at the media, claimed there were “some very fine people” among the neo-Nazi rally attendees, and falsely blamed the counterprotesters for “not having a permit”.

He appears to have been very satisfied with his response.

In the aftermath, multiple people resigned from advisory roles in his administration, leading to the total dissolution of two whole councils – the American Manufacturing Council and the Strategy and Policy Forum. When Kenneth Frazier, CEO of Merck and an African American, resigned, Donald Trump lashed out at him in less than an hour – notably faster and more organic than the two days of equivocating it took Donald Trump to outright condemn neo-Nazis.

Again, Donald Trump doesn’t do empathy. And once you see the pattern, it’s pretty clear to see why.

Empathy requires vulnerability. It means letting go of a position of power to bring yourself to the level of another person so that you can share their emotions. When empathy is most needed, those emotions cause suffering; they humble you, and force you to think beyond yourself. And that’s something Trump will never be able to do.

He won’t think beyond himself; he won’t be humbled; he won’t suffer for the sake of another; he won’t let go of his position of power. Not even for a moment, not even as an exercise to show he cares, not even if all his power is at stake.

He can pretend to be important. He can make up enemies to attack. He can try to trick people into thinking the victim’s suffering isn’t real. But when push comes to shove, he has nothing to offer but defensiveness, belligerence and performance.

That’s why he cannot do empathy. And that’s partly why his presidency will be a failure.

And that’s what we will see if Donald Trump goes to Parkland.

 

– Connor Raikes, a.k.a. Raikespeare