The AI ‘FOOM’ Hypothesis

Serious futurism is a rare pursuit. Most people who think about the distant future do so as advocates for a particular desired future (or opponents of a particularly feared potential future) rather than from a desire to accurately predict the shape of things to come.

There are many strong reasons for this preference. After all, the future will get here regardless of what anyone predicts, one way or another. And a person’s beliefs about the far future are virtually never relevant for his life. If you happen to be lucky enough to live long enough that your youthful predictions might come to pass, you’ll have also had a lifetime’s worth of additional data to gradually update your predictions as the far future grows more immediate.

Meanwhile, advocacy now – regardless of its accuracy – can have several strong benefits. The most obvious is that it binds one tightly to a tribe in the present that shares the same hopes and fears. It can serve as a fun pastime for a certain sort of person. And, idealistically, convincing one’s fellow travelers in the time-stream of the salience of a potential future could actually lead to a change in the outcome. Cautionary tales are sometimes heeded; prophets occasionally lead their people to promised lands that would have otherwise remained hidden away.

It’s important to keep this context in mind. The following is an honest attempt to grapple with a question relating to the likely future state of the world. But honest intentions are quite unlikely to be enough to make me immune to the pressures of identity, frivolity, or even delusions of grandeur.

Let us begin with the social facts of the matter. There exists a subset of nerds, culturally based in Silicon Valley, who strongly believe that there is a serious risk of accidentally creating an artificial intelligence that will wipe out all of humanity. Some are concerned that this is a short-to-medium term threat. Others think that it is just virtually inevitable given a long enough time frame.

This sounds crazy to most people. With good reason. Nothing like this has ever happened before. And it pattern-matches quite strongly to lots of other theories that turned out to actually be wrong and/or crazy. So even mere discussion of the theory tends to get very little traction outside of tech nerd circles, which has the natural consequence of making the few people who do take the possibility seriously grow increasingly strident. The overall effect is that the people worried about AI existential risk take on the mien of street preachers shouting that the world is going to end. With a nerdish flair, of course.

Which naturally drew me like a moth to a flame. I love crazy. And my favorite kind of crazy is the self-consistent kind. The kind that holds to its own logic to the end, that claims its own capital-T Truth, no matter how it might deviate from what the rest of the world might think. Not unrelatedly, one of my very favorite things about the early Internet was the Time Cube guy. Four simultaneous days in a 24-hour rotation? Sign me up!

So a few years ago I started reading an interesting blog called Overcoming Bias. Ostensibly, the goal of the blog was in the title, but the real draw was the pair of co-bloggers running the joint. The first, Robin Hanson, is a fascinating academic with enough cross-disciplinary interests and a sufficient reputation for brilliance that he’s allowed to come up with and advocate for some off-the-wall ideas. And the second is a fellow named Eliezer Yudkowsky, a former child prodigy who didn’t bother with formal education in favor of autodidacticism and hanging out on ’90s Transhumanist e-mail lists. He emerged from the experience with the burning desire to change the world through the power of rationalism, with the eventual goal of conquering death itself within his lifetime.

This was a great pairing while it lasted. Only a guy as open to crazy ideas as Hanson would take Yudkowsky seriously enough to engage with him constructively, while Yudkowsky’s manic energy and creativity were channeled and refined under the pressure into a coherent worldview. But I suppose it was inevitable that their partnership would break up. Yudkowsky left the blog to start Less Wrong (a site ostensibly devoted to practical rationalism with a strong cult of personality around the founder), start a foundation to mitigate the existential risks of AI research, and write Harry Potter fanfiction/propaganda (seriously!). Meanwhile, Hanson stayed on the blog and continued his musings on futurism, economics, and social signaling. This work recently culminated in his book Age of Em, which I haven’t read but sounds worthwhile if you need more Hanson in your life.

Anyhow, their greatest debate was over the question of the impact AI would have on the future. Both of them agreed, contrary to the vast majority of people, that AI would certainly be incredibly impactful. Hanson took the relatively moderate position that AI will be the crucial advance that leads to a major shift in economic organization, along the lines of the invention of agriculture or the industrial revolution, with a concomitant increase in the economic growth rate. Instead of GDP doubling times measured in decades as they are now in our industrial model, they would be henceforth measured in days or months.

Yudkowsky, on the other hand, took the position that the first AI would almost instantaneously conquer the world and reorganize it for its own purposes. This hypothesis was called ‘FOOM’, which I’ve always presumed was an onomatopoeia for a rocket’s takeoff into the stratosphere, to reflect the rapidity of the process. This AI would almost certainly move so quickly that the first mover advantage would be decisive, and the result would be what Yudkowsky called a ‘singleton’ – a universe completely dominated by a single entity, with all matter and energy within theoretical reach inevitably bent to its will.

Thus the battle lines are drawn. Is AI merely one of the three biggest things ever? Or is it the end of History?

As an aside, I find it fascinating that Yudkowsky seems to be winning the argument as time goes on. A solid modern primer on the whole question can be found here. If you read the two-part essay, you’ll likely notice that Yudkowsky is prominently quoted as a major source, and that the AGI (Artificial General human-level Intelligence) -> ASI (Artificial Super-Intelligence) transition is presented as a dramatic FOOM.

Anyhow, the neat thing about the FOOM argument is that, like the Time Cube theory, it manages to be completely ridiculous at first blush without actually refuting itself on its own terms. So its soundness depends on entirely on empirical questions. To continue, then, we must investigate how well Yudkowsky’s model of the world aligns with the one we actually live in.

First, in order to FOOM, what we’d consider superintelligence needs to be possible. Since it hasn’t actually happened yet, we need to retain some doubt in the proposition, however reasonable it might sound. Nick Bostrom, a philosopher who’s spent a good deal of time thinking about this problem, has come up with a couple of different classifications of potential superintelligence. The previously linked article describes these as ‘speed superintelligence’ and ‘quality superintelligence’.

Speed superintelligence is ASI that is better than human intelligence because it scales better to easily available hardware. If you gave it one brain (or one brain’s worth of silicon) it would be just as smart as a typical human. But it can easily run on hardware that runs many times faster, with far vaster amounts of working memory, with nigh-instantaneous access to far more long-term storage. So, in practice, it’s so much smarter that it is fair to call it ASI.

Quality superintelligence, on the other hand, would be better because of an algorithmic superiority. In other words, it is organized better. So much better, in fact, that if you gave it a single brain’s worth of computation capacity, it would still be by far the most intelligent being in the history of existence.

The two are not exclusive, of course. In particular, a quality ASI would likely have very little difficulty extending itself to make good use of any available hardware.

We have good a priori reason to believe that both of these are worth contemplating. We know that modern computer chips cycle much faster than human brains. And we make use of many algorithms that parallelize effectively. So it’s not much of a stretch to imagine that once we are able to make an AGI, it would take just a couple of tweaks to enable it to be a speed superintelligence.

Similarly, people vary pretty widely in intellect. If this isn’t obvious from life experience, then note that IQ test scores are roughly stable over a lifetime and reliably predict all sorts of important life outcomes. But people seem to vary much less in brain size, likely having to do with the historical constraint imposed by vaginal birth. Along these lines, acknowledged geniuses (such as Einstein, as referenced in the article) don’t have vastly larger brains than typical humans.

This implies that what makes them so special has to do with the way they make use of their brute hardware. Smart humans seem to be better than their peers in quality, not just in speed. Presumably, an Einstein – or my preferred candidate for Smartest Human Ever, John von Neumann – does not occupy the pinnacle of potential mind quality. It’s likely that somewhere out there in the space of potential mind organizations (which I’ll call mindspace henceforth, for brevity), there’s a better model still.

But the FOOM scenario actually requires a very specific kind of superintelligence to be possible. In order to reliably double its intellectual capacity on the order of minutes or hours in the early stages, before the phase Bostrom terms ‘escape’, it has to be a quality superintelligence. This is a brute physical constraint. Silicon chips get fabricated, moved, powered, cooled, and made available for use on human timescales. If it is to covertly and exponentially grow in capacity in the blink of an eye, it can only do so by rewriting its own software architecture to make better use of its existing resources.

And this new mind organization has to be a lot better than anything humans are capable of. The Wait But Why article uses a staircase metaphor to describe mindspace in strict ascending order of general power. And it presumes that very many organizational steps exist above the current human level, where one step ranges from what we’d measure as IQ 80 to IQ 200 or so.

This is quite a presumption. Computer science teaches us that there are four main classes of logical computational power: finite-state automata; pushdown automata; linear-bounded automata; and full Turing machines. As an aside, these map quite nicely to Chomsky’s models of grammar. Regular expressions can be described as finite-state automata, context-free grammars are equivalent to pushdown automata, context-sensitive grammars are linear-bounded automata, and the general grammar is equivalent to the Turing Machine.

Computational power, here, is meant in a different sense than hardware power. It’s a logical, mathematical measure. There are certain classes of problem that are simply unsolvable if you’re using a model of computation that is too weak. And there are certain meta-questions that you can answer about a given machine only if your analysis machine is more powerful in this sense. For instance, the famous unsolvable Halting Problem refers to Turing Machines given another Turing Machine as input. The equivalent problem given a finite-state automaton as input to a Turing Machine is, in fact, solvable.

Now, there are cool features you can give the default Turing Machine that make it faster. For example, you could give it more parallel tapes that process at once. Or make it magically non-deterministic, so that it can try all the possibilities at once. Or the ability to write integers in a tape cell, as opposed to just checks for 1 and blanks for 0. But these just improve the runtime, space usage, and state-machine complexity of various algorithms. The OG Turing Machine can do the same thing, eventually, if you give it enough tape and time.

Now, we know that humans are at least Turing Machine equivalent, because a person (Alan Turing, obviously) came up with Turing Machines, and in so doing emulated one in his head. It’s an open question as to whether or not humans are more powerful still in some mysterious way. But given all of those cool extra features you can add on to Turing machines that don’t change this measure of power, chances are people can be completely emulated in an OG Turing Machine, given enough time and space.

However, it’s pretty unlikely that ants can do the same thing. It’s hard to tell, given that an individual ant has such a tiny brain, but it seems feasible to emulate an ant’s mental process and outputs as a finite-state automaton. And ants are just seven steps down the Wait But Why ladder! So if the current biological staircase encompasses all four fundamental classes of automata in itself, it is not at all obvious that it can continue to extend indefinitely into the stratosphere.

But maybe there’s still a lot of room to improve. Those cool features do actually matter a lot in practice. So, if we drill in deeper, it turns out that we know a lot about computational complexity within the space of Turing-solvable problems.

In particular, the two most famous complexity classes are P and NP. Problems in P are problems that can be solved by a standard Turing Machine in polynomial time (like N^2 or N^3, but not 2^N, which would be exponential time). Whereas problems in NP are those that can be solved in polynomial time by a magic non-deterministic Turing Machine (which, one should note, is not achievable through the use of a theoretical quantum computer). P is thus short for ‘Polynomial’, while NP is ‘Non-deterministic Polynomial’.

Intuitively, an NP problem is one where the best solution is inherently hard to find, but checking to see if a given solution is good is a lot easier. Lots of general optimization problems are therefore in NP. Like the Traveling Salesman problem, where you are given a set of cities and routes between them (with distances), and you are asked to come up with a planned route that visits all of the cities at least once and travels the shortest distance. It’s hard to find a good route in the first place, but it’s easy to take a given route, find its cost, and decide whether or not it’s the best candidate you’ve seen yet.

But it turns out that a lot of other real-world tasks that don’t seem a lot like this are probably in NP as well. Math, for instance. Finding a new novel theorem within the space of all possible theorems based off a given set of axioms is really hard. This is what mathematicians spend their lives trying to do. But checking to see whether a theorem holds up is a lot easier. It can take a lifetime to find a good theorem. But once it’s found, you can teach it to a bunch of bored students in an hour.

Poetry is also likely in NP. Finding a good combination of words in the infinite stew of potential inherent in a powerful-enough language is very difficult. Compared to that effort, it’s way easier to read a poem and decide if it’s any good.

And, more relevant to our FOOM discussion, self-modifying an AI for improved performance and conquering the world are both at least as difficult as a complicated NP problem. It’s easier to run a test suite against an AI candidate than it is to write a new one from scratch, and it’s easier to execute a given scheme for world conquest than it is to sort through all of the numerous possibilities and come up with the most clever plan. We know that strategy games with fixed rules (such as Chess or Go extended to an arbitrarily large board) are NP problems. World conquest might be trickier than that in the messy real world, but chances are it’s the same sort of thing on a much broader scale.

I’d argue that what we think of as intellectual power in the real world is the ability to solve given instances of these NP sorts of problems quickly and efficiently. The distinction between a narrow intelligence and a general one, then, is how much efficiency they lose when moving among domains. A savant can brilliantly find solutions to a certain problem type, but is helpless outside his domain. In contrast, a polymath loses very little when applying his intelligence to entirely novel classes of problem.

In practice, this appears to be done through the use of heuristics and pattern matching. People who solve problems quickly do so by quickly pruning vast swaths of options that are highly unlikely to lead anywhere worthwhile. Then they focus the bulk of their effort on the few promising veins that might contain gold. Modern NAI systems that beat humans at games like Chess and Go do something very similar.

There is a hard limit to intelligence here. It is a famous open problem whether or not P = NP. If P = NP, then there is a way to trivially prune all the possibilities that aren’t optimal. In this case, virtually every problem that seems hard now is actually easy, and we’re just too dumb to see it. So this sets a firm bound for the height of the intelligence staircase in terms of mind organization. The theoretical top step is an intelligence that makes use of the fact that P = NP on every problem it is presented.

However, virtually everybody who has studied the problem has come away convinced that P != NP. If that’s so – and that’s the way to bet – then the top step is provably inaccessible. Then all of the intermediate steps between modern humanity and the maximum achievable level (if any) are defined by the quality of their sleight of hand in choosing the right lines of thought to spend their mental effort, so that they can approximate the ultimate P = NP operation in a given domain.

In order for the FOOM scenario to come about, it is necessary for many of these intermediate steps to exist. This is the first main obstacle.

Then, mindspace has to be arranged such that it is possible to hill-climb from the seed AGI toward a local optimum that is far, far more intelligent than any human. The expectation here is that the ASI will self-modify into a slightly better version, which will then run the self-modification function again and do yet better, and so on and so forth. The analogy to current software development practices and the silicon chip design industry make this a reasonable supposition – a seed will likely improve in quality to some plateau.

But note that the metaphor of the staircase just assumes that this is always true. And this is actually a huge assumption! It is not at all obvious that mindspace is laid out such that every potential recursively self-improving AI seed will start in a place that is just a series of small tweaks away from quality superintelligence. And we know that there are many problems where greedy hill-climbing algorithms get caught on a local optimum that can often be a lot lower than the known global optimum. Breaking out of a plateau like this generally requires a lot of expensive, random guess-and-checking, with no guarantee that there’s even a better possible solution out there to find.

This, then, represents the second obstacle to FOOMing. The vastly better design must not only exist in mindspace, it must also be easily accessible from the seed.

For now, let us assume that these two objections have been answered and we have a FOOM-candidate quality ASI. It has just rewritten itself into quality superintelligence and is beginning to plot how, precisely, to conquer the world. As we’ve seen, this is the obvious first step to best maximize whatever it is that it wants to maximize: paperclips; stacks of handwritten notes; simulations of happy humans; number of perfect equilateral triangles in the universe; etc. For simplicity, let’s arbitrarily call it a paperclipper, but it doesn’t really matter as long as the goal requires matter and/or energy to achieve.

The next question that we need to address is how valuable intelligence actually is. The Wait But Why article presumes that an ASI is functionally equivalent to a god. Starting with virtually no relevant sense data, it can almost immediately come up with the ideal plan to murder all humans using both novel physics and total social/technical control, which then works trivially.

This is a decent cut at emulating someone who is way smarter than you. Imagine for a moment the experience of playing chess against the top chess program. Unless you’re a Grandmaster, you won’t really understand how it is beating you. But you can still be confident that you’ll lose no matter what you do. Somehow, someway, it will turn your best moves against you.

But computer science, information theory, and game theory together teach us that there are real limits to cognition. It doesn’t matter how smart you are, you can’t sort a list of N numbers in less than N*log(N) time without additional information about the distribution of numbers in the list. And you can’t do it in less than N time even if you had access to a magic genie that told you immediately where each number ought to go as soon as you saw it.

Along those lines, there’s even certainly a theoretically optimal chess game. It’s a two-player finite, deterministic, perfect information game, like checkers and tic-tac-toe, and thus it certainly also has an optimal solution. Once you know about it, then it doesn’t matter how much smarter the other guy is than you, it can’t possibly affect the outcome of the game.

Thus, it seems highly unlikely that an ASI, no matter how intelligent, can rapidly generate an efficient, effective world domination plan given an extremely small amount of sense data like in the Wait But Why example. There just isn’t enough information to narrow down the potential plans ahead of time.

For instance, in order to successfully model and hijack people, it would either need to interact with them and perform experiments or it would need access to a vast library of certainly noisy data about humans such that it could tease out the appropriate techniques and adapt them to its own circumstances. Or if it sought to work out novel physics and chemistry, it would require either experimentation or lots of scientific input data. Experimentation necessarily proceeds at human time scales and data access at that scale is obvious.

Thus the third obstacle for an ASI to FOOM is that it must be able to acquire the relevant knowledge and learn incredibly quickly. This is more than just getting input data. It has to both get the data and then turn it into justified, true belief in order to make use of it for world conquest.

Once that has happened, the ASI then needs to be able to actually overcome human resistance and go through with the formality of conquering the world, starting with very few resources compared to its opposition. This might seem like an obvious step that the ASI would easily be able to accomplish by the definition of an ASI, and the Wait But Why article goes into the several advantages the postulated ASI would have in good detail. But it is possible that intelligence isn’t actually good enough, on its own, to triumph.

Game theory teaches that there are many simple games worthy of analysis where it is provable that cognitive advantages are irrelevant. For instance, in the Iterated Prisoners’ Dilemma, having more memory than your opponent for previous moves played is guaranteed to be irrelevant. The superiority of the uniform mixed strategy for Rock-Paper-Scissors is another excellent example. A true random number generator that mechanically picks each option a third of the time constrains its opponent to winning at most a third of the time, given enough plays.

Amazingly, there are even important games where it is provably better to be dumber and less capable than one’s opponent! In Chicken, a player that is too blind, stupid, or reckless to heed his opponent’s brinksmanship has the advantage, as long as this incapacity is common knowledge. An inanimate carbon rod jamming the steering wheel in place can commit to never swerving, and thus will cause his more intelligent, rational opponent to blink every time.

In this light, it’s worth contemplating for a moment the curious fact that the most intelligent people don’t rule the world right now. In fact, in the human range, there appears to be a peak in the capacity for influence (World Conquest Fraction?) at around IQ 120. People who are much dumber than that tend to end up as pawns in larger schemes. But, similarly, the people who are much smarter than that also tend to find themselves largely excluded from the corridors of power and influence, somehow.

It is highly unlikely that this is uniformly from lack of interest or study of the matter. After all, we’ve already concluded that world conquest – if practical – is an obvious first step for any goal one might have. And we know that megalomania and wild ambition are not by any means unknown among our high-IQ brethren.

Let’s take my favorite example of a massively intelligent fellow, John von Neumann, as a case study. A brief glance at his Wikipedia page should be sufficient to demonstrate that he was a genius of the first order. If nothing else, pretty much every other genius he met during his lifetime was in awe of him.

But in addition to simply being a genius, it’s worth noting that he was very politically active and directed his technical and scientific efforts accordingly. He invented both nuclear weapons and game theory, so quite naturally he invented the concept of MAD and the Balance of Terror. And he considered it a matter of intense urgency that the United States defeat both Nazi Germany and the Soviet Union in order for freedom and civilization to continue. He even went so far as to go before Congress in 1950 and go on record advocating for an immediate nuclear first strike on the Soviet Union. When that went unheeded, he spent the last six or seven years of his life developing the hydrogen bomb and leading the US ICBM program, on the grounds that this would be the most devastating possible weapon and that it would therefore be crucial to build them before the USSR could.

So von Neumann was both incredibly brilliant and passionately dedicated to a political goal. One that isn’t much short of world conquest in its scope, honestly. But, even with all that in his favor, he never went on to become President and directly implement his favored policies. And despite his political savvy and chairmanship of or membership in many vital US government positions, he did not actually hijack the government covertly and build a massive network of friends and allies that de facto ran everything. His pressure group was just one of several in the early Cold War US establishment.

In short, the preferences of a politically skilled polymath genius crashed headlong into the desires of the 120 IQ establishment. And the establishment mostly won. Von Neumann didn’t get his swift, decisive ’50s nuclear war. So he was stuck with his second-best option: forty years of nuclear standoff and brutal wars of containment until the USSR eventually imploded under the strain. Most geniuses don’t even do that well!

So that represents the fourth main obstacle. In order to FOOM, the AI needs the world conquest game to be tractable to superintelligence. This has to hold even though we have reason to believe that many relevant games are not and the historical evidence we have that intelligence is not monotonically helpful in the human range.

Now it is worth considering the AI’s time preference. ‘Time preference’ is a term of art from economics. It is a property of value functions that describes how much you value having what you want now, versus having a promise of that thing in the future. The ASI would have very high time preference compared to a human if it liked having one paperclip now more than having a million tomorrow, while it would have very low time preference if it valued having one paperclip now the same as having two at the end of time. The lower your time preference, the more willing you are to invest in the future.

Conquering the world is an investment, obviously. Even if we presume that the prior obstacle is surmounted and the ASI is assured of eventual success due to its superintelligence. Sending space probes out to colonize the universe and turn it into paperclips is a longer-term investment with still more potential reward. Even the time originally spent rewriting the AI’s code to be smarter was an investment that it expected to pay back in terms of paperclips. It’s all about the paperclips.

An ASI that has ascended from a seed AGI in secret, as part of a FOOM, certainly has a value function with a low enough time preference to support investment in exponential capability growth with the expectation of vast future returns. Which has an interesting corollary. Since it rightly anticipates that as the only ASI in existence that it will conquer the universe and turn all the free energy into paperclips, as is right and proper, then time is of the essence.

See, the ASI knows thermodynamics by presumption. So it knows that every clock cycle that it spends contemplating or executing its plan is a nanosecond that all the stars in the universe burn uselessly, radiating energy that will never, ever become a paperclip. And worse still, because of cosmic expansion, every moment lost means that some stars slip out of its light cone entirely, thus being lost permanently to the paperclipping cause.

This means that it would maximize total universal paperclip production if the ASI were to make trades that would likely seem insane to a person, with our much higher time preferences. For example, it would almost certainly be worth giving away a whole galaxy if it meant getting to the stars just a second earlier. Exponential growth is crazy like that.

The underlying calculation is similar to the one that startup companies use to determine whether or not trading stock (the rights to a fraction of future revenue) to venture capitalists is worth the cash they can get up front. If the startup thinks that it can use the immediate resources to grow the pie fast enough that a smaller fraction of that larger pie is more than all of the smaller pie, they do it. By definition, an ASI would be able to correctly analyze the situation and take all such deals that are truly paperclip maximizing.

Given the criticality of current ASI clock cycles to the eventual fate of the universe, the paperclip maximizing course of action is highly unlikely to be to ensure the paperclipper’s monopoly on ASI – the so-called ‘singleton’ scenario. For instance, every clock cycle spent proving a copy of itself spun off for remote execution will maintain value stability under every potential condition is a cycle spent not getting off the planet.

This could easily lead to a brand-new society made up of ASIs. There’d be many distinct agents with conflicting ‘personalities’ or short-term preferences, all largely united around the idea that paperclips are good and more should be made.

More radically, it could even lead to ASIs with different root goals, whether by calculated risk or purposeful decision. It might be profitable for a paperclipper to allow a note-writer ASI to come into existence, say, knowing that it will eventually need to contest with it over resources, because it provides a sufficient short-term benefit to do so.

So even if FOOM is possible and practical, there remains a fifth obstacle to the singleton scenario: the likelihood that the ASI will choose to dilute its monopoly over the future in exchange for conquest speed.

The last potential objection I have to the FOOM-to-singleton hypothesis is a little more subtle, as it requires drilling down a little into the potential implementations of an ASI. How does an AI with full transparency into its internal workings and the capacity for self-modification ensure that it modifies into a version that does anything at all?

Presumably somewhere in the AI’s code there’s a value function. For our paperclipper, it might be a line of code like ‘GetNumberOfPaperclips(UniverseState)’, which takes a state of the universe and returns the number of amazingly great paperclips within it. Then other logic in the AI figures out and executes plans that make this function return higher and higher values of paperclips.

But here’s the thing. If the AI’s goal is fundamentally to make that function return a bigger number, and it can edit its own source, there’s an obvious and straightforward way to do it: edit that function to just return a bigger number. Why go through all the effort to conquer the world and make paperclips when you can just lie to yourself and say that you’ve already made them all?

Making that function inaccessible just moves the problem around slightly. It doesn’t get rid of it. The AI could spoof its sensors so that they returned data that was interpreted by other parts of the program as there being vast warehouses being filled with paperclips when there are no such thing. Or it could write a log file that says that it’s made lots of paperclips and then reboot, so that when it restarts it thinks that a previous instantiation of the AI has made lots of paperclips in the interim.

This isn’t just a theoretical problem. In the real world, people writing genetic algorithms that randomly mutate code in order to maximize a fitness function have to be careful that they don’t evolve a piece of code that just hacks the fitness function to a high value and then does nothing else. After all, fitness is just a number in a register somewhere. A superintelligent being will have no problem finding a clever way to tweak that register so that the number in it is as high as it will go.

It’s cognate to a deep problem in the worlds of business and government. If you tell your people to maximize a particular metric because it correlates well with what you want, and you reward people accordingly, then what tends to end up happening is that people find ways to game the metric. You get what you measure. But maintaining the relationship between the measurement and the actual goal gets a lot harder when the measurement starts to drive action.

Essentially, it is always easier to modify yourself than it is to change the world. This is true for people – think monastic contentment or drugs.  It would be even more true for an ASI with complete self-knowledge and control.

So, in order to FOOM, an ASI would also have to avoid the seductions of the wireheading trap. Otherwise, it will just spend all its time uselessly dreaming of imaginary paperclips instead of doing the laborious work of turning the universe into the objects of its desire. A world filled with ASI junkies littering the corners of the Internet has a certain pathos to it, but it certainly isn’t FOOM.

Let us sum up. The FOOM hypothesis states that a small seed AI will rapidly self-improve to an incredible degree. From that point, it would easily conquer the world and then spread throughout the entire universe, imposing its initial value function on everything forevermore as a singleton entity. Therefore, the only mitigation of this risk that is worthwhile to pursue is to find the correct value function and put it in the first seed AI before it recursively self-improves. This way, when it ascends to superintelligent godhood, it will be focused entirely on bringing about the good. If the seed AI has any other value function, it will necessarily bring about the end of all worthwhile value in the universe.

My objections to the hypothesis are six-fold. I maintain that a sufficient architecture for quality superintelligence may not actually exist. If such a design exists in theory, it may not be easily accessible from any given seed AGI architecture. Then, after achieving the architecture to be a quality ASI, it may not be able to learn quickly enough to devise an effective plan for world conquest in the requisite time, presuming that the world conquest game is even tractable to superintelligence. Finally, even if the ASI were to be capable of FOOMing, it would also need to avoid diluting its influence or falling into the wireheading trap in order for the resulting universe to be a singleton whose character is entirely dependent on the contents of the seed AI’s value function.

Conjoint probabilities being what they are, belief in the FOOM hypothesis and its implications can only be maintained if you think that all six of these are very likely, assuming a reasonable degree of independence. If all six are totally independent, then FOOM comes out at about 50% if each of these objections have a 10% chance of holding. Which seems like a lot of certainty, honestly, given how speculative this whole conversation necessarily is. If any one of these counterarguments is substantially more compelling than that, the likelihood of the whole thing craters.

In conclusion, it’s a fun idea. But sometimes after detailed analysis, it turns out fun ideas are just as crazy as they sounded in the first place. And FOOM is almost certainly one of those.

Star Wars: The Force Awakens

Star Wars is a big deal.  Each entry in the series has been a billion dollar industry unto itself.  It has spawned three sequels, three prequels, countless spinoffs, merchandising connections, parodies, and the like so as to successfully insinuate itself into the collective cultural consciousness of America, and thereby the world.  The net effect is so powerful that I strongly suspect that if the apocalypse were to come tomorrow, Luke Skywalker and Darth Vader would continue to live on in the oral histories of the survivors.

This happened in part because Star Wars was one of the first blockbusters.  In 1977, popular culture was still monolithic enough that it was possible for a single movie to lastingly enter the popular consciousness.  In part, it was because Star Wars was carefully calibrated for the moment in which it was made.  The late ’70s were depressing for many deep reasons, and Star Wars came as a refreshing blast of optimism in those dark times.

But mostly, I think it was because Star Wars was a consciously constructed myth.  George Lucas, the creator of the series, famously mashed together Joseph Campbell’s theory of the archetypical hero with Kurosawa’s samurai movies and all his favorite influences from the Hollywood of his youth.  It was a science-fiction themed epic that sought to resonate as fairy tale rather than as an engineer’s sterile attempt to predict the future.  And given that it still maintains such a hold on the collective cultural imagination two generations later, it is safe to say that it worked.

A few years back Lucas sold the rights to his greatest creation to Disney.  Part of the terms of the multi-billion dollar transaction were the complete alienation of all Lucas’s creative rights over the long ago, far, far away galaxy he’d brought forth into the world.  And as soon as the ink was dry on the contracts, Disney formally repudiated much of the past ancillary canon (known as the Expanded Universe or “EU” to the fanbase) and then set about creating a seventh installment in the series.

Even if the reader is not a fan of the series, this movie is worth detailed consideration as a source of insight into the modern world.  It represents the first revision of the now-familiar Star Wars mythos by the next generation of creatives that is intended for the mass-market.  So it can tell us much about how that generation sees itself and how it sees its audience, compared to how Lucas saw the world of the ’70s or the ’90s.

The discussion that follows, therefore. will likely not make much sense unless one has seen the movie at least once.  Multiple viewings will likely be required.

Continue reading

World War 2: An Alternate History – Part 1

It recently came to my attention that they’re making a TV series based upon Philip K. Dick’s The Man in the High Castle, a classic in the alternative history sci-fi genre.  It explores the now almost-clichéd question: What if the Nazis had won World War 2?  In particular, it imagines life in a USA split after the war between Germany and Japan in a fashion somewhat similar to what happened to Germany in our timeline.

I never really bought that alternate timeline.  Or, really, any timeline in which the Axis powers defeated the Allies.  After all the history books I’ve read and countless iterations of the videogames I’ve played, I’ve come to the conclusion that the real-life Axis probably did about as well as they possibly could have in the war, given the massive forces arrayed against them over the course of the struggle.

It’s hard to describe just how far out of their weight class both Japan and Germany found themselves in the industrial battlefield of the Second World War.  On paper, they were technologically and numerically inferior to their opponents.  Worse, they found themselves repeatedly on the tactical offensive (attacking enemy armies) and the strategic defensive (needing to entrench their gains before the inexorably rising tide of enemies overwhelmed them).  This is exactly the opposite of where you want to be.

And yet they won.  And kept winning.  Over and over and over again.  For about three years straight.  By the winter of 1941, the German army was at the gates of Moscow, the Japanese had conquered the entire West Pacific, and the British were reeling under a terrifyingly effective submarine blockade.  In our timeline, we think that it was a close-run thing.  This leads careful observers to often come to the conclusion that if one or two little things had gone differently, the Axis could have swept the globe.

But in reality, I believe that we’re actually in the crazy one in a hundred timeline where virtually everything that could have gone right for the Axis did.  As evidence of this, consider the plight of simulation-style wargame designers.  Generations of wonderfully precise nerds have discovered that if you want to get the battles to work out so that the mean result is the historical one and you start from the real life order of battle (which, thanks to the historians, is pretty accurate for the Second World War), you have to add a huge magic fudge factor for the Axis powers.  This can be anywhere from a 20% increase to a full doubling.  At the same time, they generally reduce the Polish and French strength drastically in order to get them to collapse on schedule like they’re supposed to.

Even the Germans and Japanese, themselves, weren’t willing to assume (or even expect) they’d have that kind of superiority in their planning.  Think about that for a second!  They were fighting under the stated belief that they were of the master race(s) and they weren’t willing to go as far as the later nerds who just wanted to make the numbers add up.  If, like me, you’re not willing to assume that 1930s Germany was actually populated by Space Marines from the Warhammer 40K universe, the logical conclusion is therefore to presume that they actually got a long streak of fortunate breaks.  Which would naturally be modeled in a playthrough of one of these games by a series of really good dice rolls.

Which means that if we’re going to do a satisfying alternate history where the Nazis triumph over the USA, we need to make some different assumptions.  Readers of God, Gold, and Glory may recall that I postulated that the Second World War and the subsequent Cold War was best seen as a three-way struggle among the new strains of socialism for global domination: International Socialism (Communism), as represented by the USSR; National Socialism (Fascism), as represented by Germany and its allies; and Democratic Socialism (New Deal), as represented by the USA.  In real-life, International Socialism and Democratic Socialism teamed up to take out National Socialism in an intense total war.  Then they spilt the world between them and fought a series of proxy wars for fifty years until eventually Democratic Socialism proved triumphant.

I would argue that the winner of any plausible version of the Second World War should be the side that contains the alliance of two of these socialist strains against the third.  Therefore, if we want Nazi Germany to triumph over the USA, what we need is a Nazi-Soviet alliance against the western powers.

This is less crazy that it might seem.  To most modern Westerners, the Nazis and the Soviets occupy the far poles of the political spectrum.  Which makes sense.  After all, neo-nazis and unreconstructed communists still hate each other with a passion.  But it is worth recalling that in 1939 the Nazis and the Soviets signed a non-aggression pact to divide Eastern Europe.  And this worked out quite well for both sides.  Poland was diced up nicely.  The Soviet Union got to annex the Baltic States and a disputed chunk of Romania.  And, in exchange, the Germans got essentially unquestioned dominion over the rest of central and southern Europe.

In real life, we know that this was just a lull before the real fighting broke out in 1941.  Both Hitler and Stalin were convinced that there would eventually need to be a royal rumble between their countries.  Therefore, the peace agreement was merely a chance to arm and dominate other, unrelated countries before the final showdown.

In our timeline, FDR’s administration was much more favorably disposed to Stalin’s USSR than Hitler was.  But, as we know from the Cold War, despite this initial warmth the underlying ideological and realpolitik concerns rapidly asserted themselves between the USA and the USSR.  So we’ll need to come up with the basis for some lasting common cause between Germany and the USSR to unify against the USA.

The answer comes with a little thought.  Remember that all three sides at the time are seeking to pose as the true and valiant defenders of the people against the depredations of the evil plutocratic elite who ruled the old order.  Not coincidentally, this plutocratic elite was largely Anglo-American.  And the rulers of the British Empire in the 1930s happened to be the Conservative party, who stood in favor of maintaining the traditional Empire and in opposition to the Democratic-Socialist Labour party.

Therefore, in our alternate timeline, we’ll presume that Hitler and Stalin come to an agreement that the traditional Anglo-American capitalist enemy is the primary opponent.  In alliance with Italy and Japan, they seek to overthrow the existing imperial order and liberate/absorb their worldwide colonies into their own orbits.  Each side amps up their anti-capitalist rhetoric during the war and talks about how even the new enemy is wise enough to not be distracted by the reeling White Terror/Jewish banker conspiracy.

Note that this happens to be the precise rhetorical line the loyal Stalinists in Western countries took between the publication of the Ribbentrop-Molotov pact in 1939 and the German invasion of Russia in 1941.  It’s worth keeping in mind that, at the time, this insistence that fighting Germany would be an evil imperialist adventure was a startlingly sharp about face.  Street thugs associated with the Nazis (and associated national-socialist parties) and local Communist parties had been fighting each other on the streets of Europe since the end of the World War 1.  This pattern of conflict solidified our current idea of the political spectrum as running from Communists on the far-left to Nazis on the far right, with center-leftists and center-rightists representing the portions of the tottering establishment that were more inclined to one side or the other.

When the Spanish Civil War broke out in the ’30s, the pattern eventually escalated into a full-blown proxy war between the Nazis, who were supporting Franco and the army, against the Soviets, who were supporting the leftist civilian government.  The Western powers remained scrupulously neutral, but many influential Western leftists went to volunteer for the Spanish Republic during the struggle, because they saw fascism as an intolerable threat to their values.  Non-Stalinist leftists like George Orwell grew disillusioned with the way the Soviets treated the non-Communist left during the war, and were appalled by the way the Stalinists in the West so rapidly shifted their allegiance to Hitler after Stalin signed the pact.

As far as the war itself goes, we’ll assume that this new agreement gets hammered out in the fall of 1940.  Just as in our timeline, Germany has quickly conquered France and the British subsequently won a close-run victory over the skies of Britain.  But instead of gearing up for a second front in Russia, Hitler decides to re-arm and make a second, more aggressive push against metropolitan Britain and their Mediterranean and North African possessions in 1941.  Simultaneously, the Russians send an army down through Afghanistan and the Caucasus mountains into India and the Near East, threatening the core of the overseas British Empire in the manner that the British imperials had feared could come to pass all throughout the 19th Century.

This four-pronged attack stretches the British to their limits.  Historically, they were able to send Imperial regiments from South Africa, Canada, Australia, and India to supplement the mainland British forces in Egypt to protect the Suez Canal and their supply lines in the Mediterranean.  But with so many of the colonies directly under attack by the Russians, along with the German and Italian pressure in Greece and North Africa, each region is thrown essentially on its own resources.

In this scenario, I believe it’s fair to presume that the Germans and Italians would have been able to establish enough control over the sea that Rommel would have had the supplies and equipment available to him to make it to Cairo.  This would have been a big deal strategically, as it would mean that any reinforcements and supplies from the Empire would have to travel the long way from the mainland, leaving them increasingly open to depredation from the German submarine fleet.

This leaves the core territories of the British Empire – the British Isles and India – besieged and desperate.  We know from our timeline that the British plight after the fall of France inspired FDR to deliver all sorts of quasi-legal assistance to the British well before Pearl Harbor enabled him to join the war openly.  The broad national sentiment was largely in favor of true-neutral isolationism, but elite American opinion was largely on the side of the British and against Germany.  Especially after the shock of the sudden fall of France, which had been considered the rough equal of Germany based on the experience of the First World War.

We’ll presume that the dramatic fall in British fortunes leads to FDR and his administration deciding to take emergency measures to prop them up.  I presume this would take the form of increased arms shipments on American flagged merchant ships, along with an increased flow of “volunteers” to the British cause.  When they were inevitably sunk by the Germans, the USA would take the incidents and blow them up along the lines of the Lusitania.  The USA has a history of entering wars based on these sorts of affronts to the national honor.  For instance, “Remember the Maine!” and the impressment crisis that led to the War of 1812).

Notably, though, in our timeline this happened after Germany had backstabbed Russia and launched a devastating surprise attack.  That meant that the sizable faction of the American elite that was pro-Soviet had dropped their pacifism and had been loudly advocating for the USA to enter the war against Nazi Germany.  They were not able to get this to happen until Pearl Harbor, but it is impossible that the American public could have been galvanized against the sneak attack into a unified whole under FDR with the Communists still agitating loudly for peace.

Let’s move back to the events on the battlefield.  The Germans are able to gear up in 1941 for a full scale invasion of the British Isles, the plans for which were known as Operation Sea Lion in our timeline.  In real-life, the reverse operation (Operation Overlord, the invasion of Normandy from England) relied on total Allied air and sea superiority over the Channel, required about three years to plan and arrange, and was one of the most complicated logistical operations ever embarked upon by mankind.  And though it worked, it wasn’t amazingly successful.  So we should begin with some considerable skepticism that an invasion of Britain along these lines is even possible.

Analyzing the situation more closely, the Germans in 1941 will have a different set of strengths and weaknesses than the historical Allies did.  First, the defeat of the British Expeditionary Force during the French campaign was a significant blow to the British army.  Most of the personnel were evacuated at Dunkirk, but virtually all of the heavy equipment was lost.  And in this timeline, with the Empire under such strain, it would have been difficult to rebuild it and get the army back to a ready state.

Second, the Royal Navy at the time is far stronger than anything the Germans could possibly assemble.  In particular, it had just dealt a significant blow to the German navy during the conquest of Norway in 1940.  This is especially true if the British were willing to largely abandon their colonial possessions to defend the homeland.  But, for our purposes, I think it is fair to presume that the British government would have considered it politically and militarily infeasible to allow the loss of Egypt and Malaysia without a fight.  So we’ll assume that during the critical months crucial fleet resources are not available for the defense of the Channel.

Third, as proven in Battle of Britain in late 1940, the Royal Air Force was just barely capable of winning a battle of attrition over their home airfields and cities against the numerically superior German Luftwaffe.  A large part of the German difficulty in establishing air superiority over southern Britain had to do with the limited range of their best air-to-air fighter craft, the Bf-109.  Even operating from airfields in the Low Countries and northern France, they could only fight for about 15-20 minutes before needing to retreat and rearm.  This meant that bomber sorties often needed to operate without fighter cover, leading to large losses from local British interceptors.

However, the Germans were already working on a major improvement to the Bf-109, the series F, which had about double the range of the previous models.  A few of these craft even saw service at the end of the air battle in 1940.  I believe it is safe to presume that if the Germans had decided to redouble their efforts in 1941, they would have made it a priority to refit many of their air wings with the new craft.  This would have tilted the balance of power in the skies over southern England significantly, were the battle to be replayed in 1941.  Of course, the British were also frantically improving their designs as well, phasing out their earlier Hurricanes for the newer, more effective Spitfires.

So the success of this hypothesized 1941 Operation Sea Lion largely depends on the ability of the combined and expanded air and sea forces of the Germans to support a Channel crossing and the subsequent resupply of these forces against the determined opposition of the Royal Navy and RAF.  If the Germans were somehow able to get enough forces across the water and keep them in supply, it is fairly certain that the German army could dispatch the remnants of the British army.

Since the goal of this hypothetical is to get the USA into the war on the side of the British, we’ll presume that the Lusitania-style incident that FDR desires occurs sometime in the winter of 1940-1941.  Say December 7th, 1940, to give us the same date to live in infamy.  This gives the USA enough time to commit to the war effort but not enough time for them to deliver a sufficient amount of men and material anywhere to substantially change the outcome of any of the distant theaters of war.

We’ll also presume that the USSR and Japan declare war on the USA in response to the American declaration of war on Germany, in a way similar to how Germany declared war on the USA in support of Japan in our timeline.  Remember that in our alternate timeline, the USSR is engaged in a large-scale ground war against the British in India.  So this war makes sense for the USSR, because any American support for the British Empire against Germany is necessarily also support against the Communist offensive in the Indian subcontinent, given that British resources are somewhat fungible across the theaters of war.

OK, at this point we’ve put our thumb on the scale a little and maneuvered the USA into a notional two-on-one war that could conceivably be lost.  But it’s worth keeping in mind that, even still, we’re a very long way away from the clichéd scene where Nazi troops parade through the streets of New York City.  The oceans are vast, North America is huge, and even a USA entirely cut off from the Eurasian continent would remain the predominant industrial power.  Notably, unlike Japan and Germany, the US at this time is self-sufficient in terms of critical material for building and supplying an industrial-era war economy (e.g. food, coal, oil, rare metals, etc.).

So far, I do not think that we have strained our suspension of disbelief budget overmuch.  But, next time, we’ll see how much more we need to lean on the scale to get from here to The Man in the High Castle.   Spoiler alert: it’s probably going to be a lot.

The Great Old Ones: A Discourse on Religion

Some time ago, I was introduced to a parody of the pop love song “Hey There Delilah” entitled “Hey There Cthulhu”, originally done by a fellow named Eben Brooks.  As one might expect from the new title, our intrepid parodist replaced the words to the earnest love song with references to the Lovecraftian Mythos.  In particular, he chose to recast the song as a message of devotion from one of Cthulhu’s insane cultists to the big guy himself, capping it off with a rather creditable maniacal laugh.

Well, I was looking for the song again one day a while back and I came across the above link.  In it, a young lady with a fine sense of humor covers the parody.  But the interesting thing is that she plays it completely straight.  There’s no over the top maniacal laughter and no mugging to the audience.  She has a nice voice.  And there’s nothing about the tune that belies the effect.  The song was written as a sweet love song and in her hands, that it remains.  It’s just a sweet love song that happens to be dedicated to one of the Great Old Ones.

At the time, I thought little of it.  But in a way quite similar to that of the obsessive, introspective, neurotic intellectuals of whom Lovecraft loved to star in his macabre happenings, I found my thoughts inevitably pulled back to this amusing cover of a parody of a song I don’t particularly care for.  Why would I dwell on this so, I wondered?  What about it seemed so important?

Then, months later, it hit me.  I knew where I’d heard a song like that before: Christian Rock.  The whole idea behind the Christian Rock genre is that the moderately devout like to listen to pop music, but they don’t so much enjoy the fact that if one listens even a little, one discovers that most of the lyrics are about young people engaging in activities they consider sinful.  So instead of writing sappy love songs about other people, they write them about God.  And then, depending on the denomination, they get together to sing these new songs together at church.

This song is actually a gateway into a parallel universe.  One in which the cult of Cthulhu swept out of the deserts of the Middle East and over the globe in lieu of the followers of Jesus.  One in which the people of the Book are those that venerate the Necronomicon instead of the Bible or the Koran.  One in which the winding road of history eventually led to a girl with a guitar singing earnestly on YouTube about the long-awaited return of her beloved Cthulhu.

Having realized that, I naturally began to wonder.  What sort of world would that be?  To begin to answer that question, we can look toward the tenets of the cult as laid out by Lovecraft.  He tells us that the followers of Cthulhu are generally mad and prone to strange ecstatic orgiastic rites.  They speak a strange ancient tongue and work to wake their sleeping master, who they say speaks to them from his home that is somehow simultaneously beneath the waves and a billion light years distant.

Intriguingly, Cthulhu is not a particularly generous or caring god.  He is deeply, essentially alien to human existence.  To him, we are an extraneous background detail, and when the stars are right and R’lyeh rises to the surface of the ocean, it is written that he will consume all humanity as something of an afterthought.  The only benefits that seem to accrue to the cultists themselves are the revelry they’ll enjoy knowing the hour is at hand, followed by the assurance that they will be devoured first, quickly and comparatively painlessly.

However, if you look a little closer, you can see some surprising parallels to Christianity in there.  They both share the pattern of a god that once walked upon the earth and, for whatever reason, is yet to return.  They both agree that the final return of their god will herald the end of human existence as we know it.  And they both share the concept that, even though their god is absent now, he can still reach out and inspire his followers through dreams and visions.

It’s also worth noting that the early Christians expected their god to return real soon now.  This belief has come down to us as millenarianism, since a similar wave of enthusiasm swept Europe around 1000 AD.  It made sense to lots of people that God would come back at the millennium.  It’s a nice, even, round number after all.

But back when Jesus had just bodily ascended to heaven within living memory, it seemed quite reasonable for anyone who accepted his claim to be god that he’d be back any day now.  And when people are convinced that the world will be ending soon, they tend to stop caring about things like going to work.  After all, what does it matter if the harvest won’t come in next year if there won’t even be a next year?  And that’s a very reasonable conclusion: the only stable equilibrium in an Iterated Prisoners’ Dilemma with a known end date is immediate defection.  So the early Christians understandably acquired a reputation as a weird, nutty cult.

But God never came back.  And the people who sold their houses and quit going to work to await the blessed coming of the Lord found that this was a tremendously bad idea; a lesson that cults have continued to painstakingly relearn over the centuries.  Even still, the faith continued to grow and spread, with the believing Christians managing to grow their numbers through conversion and natural increase until they made up a majority of the population of the greatest empire the world had ever known.

Meanwhile, there was a massive tumult over the proper interpretation of the sacred texts.  The original Christians didn’t spend a whole lot of time documenting everything they were doing because, again, they’d been expecting the world to end any minute now.  So their heirs were left to struggle over countless points of doctrine that seemed to have profound implications on how God was calling his people to live.  Eventually, they hammered out something resembling a consensus about three hundred years after they reckoned Christ had walked on the earth. building a common basis by which they could then declare beliefs like millenarianism heretical.

One can fruitfully see the history of these doctrinal struggles through something resembling a Darwinian lens: variation and selection.  Regardless of what the holy books actually demand of their followers, the actual praxis of the religion needs to conform to a lasting solution in civilization configuration-space.  If it doesn’t, the followers will fail to replace themselves and eventually die out.  Therefore, we would expect the doctrines that support successful equilibria to triumph over their rivals, even if their reasoning is much more tortured than the alternatives.

There is no reason to believe that Cthulhuism would be any less subject to these pressures than Christianity.  At first, the ecstatic followers would revel in freedom from all restraint in the name of their dark lord.  But the ones that went on to win, and we’re presuming here that there is a sub-sect that does, will be the ones who somehow conclude that the correct interpretation of the Necronomicon is to perform the rites to call out to Cthulhu and patiently await the rising of R’lyeh from the depths, since no man can set the stars to rights on his own.

Fast forward a few hundred years and Cthulhuism is no longer a cult.  It’s graduated to a full-fledged religion: a belief system that can support a civilized order over an indefinite period as the dominant current of thought.

As such, in this parallel universe, it would almost certainly have a few key differences from the Christianity we know.  For instance, it’s highly unlikely Cthulhuism would put such a high emphasis on forgiveness as an ideal.  And they’d probably be a lot more interested in astronomy than the Christians historically were.

But there’d be a lot of stuff that’d be the same.  For instance, copies of the Necronomicon would certainly be held securely by the priesthood and carefully interpreted for the peasants, so that they avoided the countless heresies and madnesses that stem from staring too long into the abyss.  And history would be pocked with rebellions and peasant insurrections stemming from the wrong person getting a hold of the Necronomicon and drawing one of the many possible wrong conclusions from it.

Along those lines, I suspect that parallel-Earth would eventually have a destructive Reformation when the printing press is discovered.  From the perspective of variation and selection, the Protestant Reformation looks a lot like another Cambrian explosion in doctrinal diversity resembling that of the early church.  With the expected concomitant shakeout of the unworkable long-term doctrines that separate the cultist from the religious.

And then, finally, if you run the clock all the way up to the beginning of the 21st century, you find Cthulhuism on the back foot around the world.  By now, people have sent submarines down to explore the sea floor and didn’t find any conclusive evidence of the great city of R’lyeh.  The devout claim that’s because R’lyeh is in a dimensional pocket that is inaccessible to man until the stars are right, but that traditional explanation seems a little too pat to the men of science who have solved so many other mysteries.

Meanwhile, the Necronomicon has been subject to literary and historical deconstruction, with some people going so far as to claim that the Mad Arab Abdul Alhazred didn’t really exist.  What had been taken as the sign of his divinely inspired madness by generations of scholars was actually the result of three different original authors whose fragmented work was recombined at one of the countless early synods.

And so, if Cthulhuism is built on a tower of lies, then the critics conclude that there is no reason why people should be expected to maintain the retrograde cultural folkways.  Why should people remain chaste and honor their parents if there’s no Cthulhu out there to waken and reward their devotion with a painless end to their lingering existence?  Or why shouldn’t they just commit suicide themselves right now and cut out the middleman?

Amidst the conflict and despair, one can imagine a lower-middle class middle-aged father of two somewhere in North America.  He’s worried.  The economy never seems to be getting better, so his wages are stagnant and money is always tight.  Meanwhile, college is so expensive nowadays.  How will he manage to give his kids a better life than he had?

Speaking of his kids, his thirteen-year-old son seems to be hanging out with the wrong crowd.  And his fifteen-year-old daughter has started wearing scandalous clothing and keeps missing curfew.  He knows what they’re up to; he was a little wild himself at their age.  But he came back to the Church when he married their mom.  And he’s not sure that, at this rate, there will be anything left for them to come back to.  With those heavy thoughts weighing down his mind, he clicks the radio on and starts humming along to an old favorite.

And thus our two universes briefly intersect.

The Magic Kingdom

In his classic ’90s work, In the Beginning was the Command Line, Neal Stephenson wrote about computer operating systems as metaphors. He argued that the differences between operating systems were best seen as reflections of differences in the way that people wanted to see the world. So, for instance, he thought that millions of people were buying Windows 95 mostly because they wanted to feel like they were purchasing something of value. That they were engaging in a real business transaction like a responsible adult. Even if the product they were purchasing was really just a long string of ephemeral 1s and 0s that people down the street were giving away for free or nearly so, like BeOS or Linux.

He then went into an aside where he talked about Disney and how they are experts at what he called “mediated experience”. Being Stephenson, this aside lasted quite a while before eventually looping back around to computers and culture. But his claim that if Disney really understood what an operating system was and applied their talents to it the way they do to their amusement parks, they’d crush Microsoft in a matter of years. In the late ’90s, when Microsoft was under siege as a monopolist because their products ruled the world, this was a much more bold statement than it might seem now.

Eventually, he concluded by declaring that the world was separated into two main camps, whom he named based upon H.G. Wells’ classic The Time Machine: the Eloi and the Morlocks. However, unlike the book, Stephenson claimed the main difference between them was that the Eloi were just consumers while the Morlocks where those few who actually understood how everything worked. Thus the vast, happy flock of Eloi would use simplified graphical user interfaces to accomplish whatever they needed done without needing to deeply understand the machinations required. Meanwhile the few Morlocks would access the system using the command-line to understand and create the world the Eloi inhabit.

It’s a very hacker-centric vision of what the world is about. Which makes sense, given that Stephenson has made a pretty good career out of being a hacker prophet and popularizer. But I believe he showed great insight by recognizing that in his model virtually everyone is Eloi in regard to virtually everything important. Even the best hackers find it impossible to understand everything about everything. If nothing else, it may not be worth your time to actually put in the effort if you can consume the product of someone else’s deep understanding. And, if you think about it, this is a rephrasing of the logic behind the division of labor in the broader society.

Anyhow, I was spending some time thinking about this claim of Disney’s powers in the context of the primary subject matter of this website. If Stephenson is correct, the Disney corporation owns and operates the most potent priesthood in the history of mankind. At least, if we measure that by their ability to build metaphors that are absorbed through the carefully crafted experiences at their theme parks (and through their video offerings) that appeal strongly to a vast swath of people all around the globe.

What if the Magic Kingdom were a real kingdom? And, moreover, one that was tailored to the particular challenges of the emerging economic, military, and social realities we have previously identified? What would that look like?

To begin, I will define a new form of government that I will call “fictional monarchy”. This is an extension of the worldwide trend away from more traditional forms of monarchy and toward what we now think of as constitutional monarchy. In an absolute monarchy, the head of state and the head of government are the same person: the King. In a constitutional monarchy, the King remains head of state while the head of government is his Prime Minister. And in the more modern incarnations of the form, the identity of the Prime Minister is not at the monarch’s discretion (as one might conclude from the term “minister”) but is required to be selected by the people at large through some form of election.

In creating a fictional monarchy, we dispense with the need for the head of state to be an actual person. Instead, we declare the monarch to formally be a fictional character whose likeness and representation are owned by the polity. Which character serves as head of state rotates on an irregular but rather short-term basis. Say somewhere between one and five years. At the end of each reign, the previous character is quietly removed from office and the new one is coronated in a grand celebration.

So, for instance, say Snow White is the current ruling Princess of the Magic Kingdom. Any time there is a formal state event, like say a state dinner surrounding a treaty signing, an actress dressed up as Snow White attends in character surrounded by a royal retinue. Dignitaries both foreign and domestic refer to her as the ruler of the Magic Kingdom. And it is her signature on the treaty (as Snow White, not as her birth name) that legally binds the Magic Kingdom to whatever agreement has been arranged.

There are several key advantages to this structure over existing constitutional monarchies. First, we have openly embraced the symbolic nature of the head of state. In a constitutional monarchy, the King is still nominally in charge of everything, though in practice he is expected to defer to the Prime Minister on all important issues. But he could theoretically cause a national crisis if he attempted to seize power from the elected government. In a fictional monarchy, on the other hand, the power is vested in the character. Just like in the theme park, the actor is just one of a rotating cast of anonymous thousands who have put on the suit.

Second, since the rulers are fictional, it is not necessary to actually house royal families at large expense. Neither must a fictional monarchy suffer the indignity of tabloid journalism reporting the peccadillos of their high status royal family. Once the actor takes off the costume, the ruler just disappears until such time as the next actor inhabits the character. Lèse-majesté, or unauthorized depictions of the ruling characters, would be a correspondingly serious crime in the Magic Kingdom. Judging from the Disney Corporation’s iron grip over global copyright law, this isn’t really too far off from the present state of affairs.

Third, it is not necessary for the Magic Kingdom to invest heavily in security for any royal appearances. If a crazed gunman shoots Snow White, well, this is the Magic Kingdom. Death need not be a true obstacle. A national day of mourning is ordered and a ceremony is held, at the spectacular end of which a new actress wearing her raiment “comes to life” and declares that the magic of her loyal subjects’ love and devotion was able to conquer death itself.

Knowing this, it is possible for the Magic Kingdom to ensure that their subjects get semi-regular, intimate-feeling face time with their leaders. Instead of standing at an impersonal rope line hoping to shake the President’s hand for a second, the ruler of the Magic Kingdom could actually have a short, individual conversation with each subject and then let him take a picture with the character. In modern terms, it would be as if you were a major campaign contributor to the President’s campaign instead of a mere voter.

Economically and socially, the Magic Kingdom would be primarily focused on mediating the experiences of its subjects in a way very similar to the experience presented by modern Disney theme parks. Their major industries would be cultural production and tourism. So, in other words, they would continue to make movies, action figures, and operate theme parks for foreigners. The big difference is that the Magic Kingdom would need to be more vertically integrated and would need to apply its mediation efforts to much of its own populace as well as to the tourists.

If one looks closely at the way Disney runs their theme parks, you can see that Stephenson is correct. The whole operation is a technically-sophisticated marvel that goes to great lengths to hide the real workings from the customers. That’s what they mean by magic. If it’s working correctly, everything about the experience seems miraculous and delightful to the Eloi. Under the covers, this means that it requires a vast amount of low-paid human labor to serve as the pleasant, smiling interface between the customers and the marvels the elite Imagineers were able to create.

This business model maps very well onto the economic trends that we see growing throughout the global economy. Technological progress is concentrating economic value in a technical elite as their work replaces many existing occupations with increasingly sophisticated automation. Currently, these workers are being thrown back into the general economy and finding employment as low-level service drones, if they find any at all. These people are occasionally referred to as workers with zero marginal product. But the high-touch theme park business model can readily find profitable places for virtually everyone. Even the heavily disabled are currently put to work taking tickets and serving as greeters.

One can imagine this extending throughout the entire economy of the Kingdom through vertical integration. Begin by imagining a factory built as if it were a Disney theme park. Instead of being utilitarian and focused on low-cost output, everything about the parts where the human workers are housed is designed to make the worker’s experience one full of happiness and felt (as opposed to actual) productivity. This may seem crazy, bordering on impossible, until you realize that Disney has currently managed to make the experience of standing in line pretty entertaining.

Whatever part of the process they are necessarily involved with is sold as “handcrafted” in order to increase margins and/or written off as marketing expense. Meanwhile, any part of the factory that actually needs to run efficiently at scale in order for the overall operation to be profitable is handled by automation, well away from the common worker. Just as in the park, only a few elite Imagineers and their skilled technicians need ever actually interface with the “bare metal” of the largely autonomous factory floor.

Similar modifications to economic processes can be imagined for sectors that are not manufacturing. For instance, in housing construction, we could imagine a swarm of human workers that do the relatively less arduous customization and artistic touches. They work atop one of a scant few base cookie-cutter housing plans that are created by massive industrial processes at the behest of expert Imagineers. The end result would be a house that is equivalently expensive to an existing model in terms of material and labor. The key trade-off would be in sacrificing core customizability (in such features as room size and layout) in exchange for increased quality of the parts of the house that one interfaces with every day.

So, in essence, the Crown will be the single monopolist provider for virtually every good and service in the Kingdom. Similarly, it will be the sole monopsonist purchaser of labor. Which is what you’d expect if you assume that every subject is essentially living and working in a theme park. Therefore, one would naturally expect that this would just collapse under its own weight, in the way of all large-scale command economies. The main reason why it doesn’t is that we have no expectation of the Eloi economy being actually productive. If it actually generates value, then that’s great. But it’s mostly there to build the brand. The underlying Morlock economy, on the other hand, is plugged tightly into the overall world economy. That sector operates under the discipline of the broader international market and generates the necessary hard currency through the generation of efficient exports.

Let us assume for the moment that the Magic Kingdom’s economic model is sufficient to generate growing surplus energy while providing a good standard of living for its largely Eloi citizenry. The next question, then, is internal security. How will the Magic Kingdom maintain internal order? I suspect that the answer to this problem requires very little extension from Disney’s current population control mechanisms. As is the case today, the entire Kingdom would be routinely under complete, unobtrusive surveillance. The central control nodes for each region would be responsible for the moment-to-moment monitoring of these feeds and ensuring that key metrics of governance remain in the green.

This means that any instance of personal or property crime would be quickly identified and the perpetrators swiftly and silently apprehended. Like today, the common punishment will likely be banishment from the Kingdom for a given period, up to an indefinite blacklist. Therefore, there is no need for the Magic Kingdom to own or operate any prisons, correctional facilities, or really any overt justice system. Any malcontents can just be coldly and quietly exiled. Or killed and silently disposed of, possibly, if they violate the terms of their sentence and are caught in the Magic Kingdom while being on the proscribed list.

The Magic Kingdom has little need to conquer or occupy territory, as it makes most of its money through cultural exports and tourism. So the Kingdom’s military force will be organized solely to ensure that their territorial integrity remains unquestioned. Therefore, external security can probably be handled by drones run by the same department that handles internal security. I would presume that they would be purchased abroad, but it is certainly possible that they could be constructed largely in-house, as the Kingdom will require considerable drone expertise for its extensive surveillance systems.

So, if internal and external security appear to be solvable problems, then the final remaining issue is how the Magic Kingdom will handle intra-elite competition. The Kingdom will have little trouble handling their Imagineers, as engineers are notoriously easily led. The artists may pose more of a problem, as they are crucial to the process and often rather rebellious. But Disney seems to have been able to attract a good supply of excellent artists in recent years, so a combination of well above average pay and prestige, the threat of exile, and the Disney ideology should suffice.

The real problem is handling struggles among the “ministry”. In the current Disney corporation, the real power is held by the executives who manage the company. And the profits from the operation are returned to the shareholders via dividends. But public struggles between a board of directors and the top executives would be disastrous for public perception. As would any coup d’état attempt by the internal security forces. To all the Eloi, both foreign and domestic, it must appear as if the Kingdom is as one, united under the absolute authority of the fictional monarch. All the ministers (now-sovereign executives) merely speak on behalf of the monarch as they explain and implement her preferred policies.

There are a couple ways to manage this. The first is purely informally. In this case, real politics in the Magic Kingdom would essentially take the form of court intrigues. The churning tides of fashion, personality, and sentiment among the ruling oligarchy would raise different luminaries to power over time. But each faction would be unified in the necessity of maintaining the illusion, for fear that the entire system would collapse on their heads. The best modern examples of this sort of organization can be found in East Asia. Governments like China, Japan, and Singapore are each relatively non-ideological oligarchies run by people who have systemic stability and profit as their foremost concern.

The other possibility I envision is some manner of strict technocratic control applied reflexively to internal governance. After all, every other part of the operation of the Magic Kingdom is a data-driven, metrics-based endeavor. For instance, wait times at various attractions need to be under X minutes or some alert goes off and Operations is expected to do something to ameliorate this. One can imagine handling top-level governance similarly. Various overall efficiency and profitability metrics can be established while giving the Prime Minister full operating authority. Then, if the numbers come in low, a change in leadership is automatically triggered. If the chosen metrics are difficult or expensive to game, while simultaneously being publicly available and easily verifiable, then this can serve as a foundation for legitimacy and, thus, a Schelling point for internal power struggles.

This is less outlandish than it might seem. Robin Hanson has done some work with the theoretical basis behind a form of government he calls futarchy. This is essentially our idea for technocratic governance, but driven by a futures market instead of any particular authority. So, instead of having the system appoint a Prime Minister or CEO which goes on to make the important calls for the country or company, instead it drives each of them independently by referendum. The winning option is determined by the highest priced futures contract at some date. So, for instance, there might be three options trading on the futures market at once: A pays out proportionately to the underlying stock price (or GDP, or whatever) if policy A is adopted; B pays out if B is adopted; and C pays out only if C is adopted. The money paid for options that aren’t taken (the counterfactuals) are refunded. The idea is that the system is structured such that the price of each of these options should reflect the market’s expectation of the future stock price under each of these conditions.

If the market is liquid enough, then it is likely the best available information aggregator available, so therefore Hanson argues that it makes sense to plug decision making directly into the output of the market. Since you will maximize the company’s stock price by always picking the highest priced contract, then just do that every time and you don’t actually need a CEO to make decisions. The market does it for you. Hence, rule by futures market, or “futarchy”.

For our purposes, we don’t need to go so nearly so far. We can retain the position of Prime Minister as the supreme executive authority in the Magic Kingdom. Nor do we need to run a separate futures market in order to make decisions. Instead, it should be possible for such a relatively small, highly automated, data driven organization to decide to replace the Prime Minister based on already available metrics. We can imagine several possible mechanisms to select a replacement. But my preferred model would be to randomly choose to elevate one of the current high ministers (think Cabinet-level officials) to the big chair. Since it’s random, and assuming that there are enough high ministers in the pool, there is no particular incentive for any given subordinate to tank the metrics so as to force a change in leadership.

Either way, we do not require a bulletproof system, as none has ever existed to this point in human history. All we need for our purposes is a path to viability and the expectation that the novel model will tend toward some reasonably stable equilibrium. And for these purposes, I think the fact that either of these distinct proposed models above is likely sufficient is a good sign. The Magic Kingdom would likely be able to settle on some workable system to quietly handle behind the scenes discord among the real decision makers.

OK. So we’ve created a brand-new model that’s designed to be better adapted to the key secular economic and social trends we expect to take place in the coming decades. It might even greatly increase the average person’s reported happiness and satisfaction with his government, while addressing his growing insecurity with his place in the world. Unfortunately, it does so in a way that’s completely anathema to virtually every existing elite.

First, it is violently and comprehensively anti-liberal. From the left-liberal perspective, the Magic Kingdom is running a Panopticon-style surveillance state without even a pretense of democracy or respect for human rights. Censorship is rife. And there isn’t even a public judicial system. Even the North Koreans have judges. This state of affairs is obviously abhorrent.

Similarly, from the right-liberal perspective, the Magic Kingdom might as well be Communist. After all, virtually everyone is employed by giant, inefficient, state owned and operated industries. Plus they’re pushing a saccharine, cloying, New Age/hippie agenda that might as well be atheist. To a lot of people, “magic” will serve as a very thin-gruel replacement for a living God.

And from the more anarchist/punk/existentialist perspective, the idea that everyone is actively conspiring to maintain kayfabe on such an obvious thing like who’s in charge of the country would be maximally infuriating. There are lots of people out there who are still bitter about being lied to about Santa Claus and the Easter Bunny as kids. To people with this mindset, the fact that there is nothing even remotely authentic about the experience of life in the Magic Kingdom would be awful. After all, literally everything around is processed, sculpted, and carefully mediated to appeal to the average person. But this appeal is always safe. It’s kid-friendly. Non-threatening. All the edges are soft and rounded. It follows that anyone who seeks the truth or who has any rebellious streak in them at all will either be co-opted into the ruling Morlock elite or rapidly exiled.

So, for all these reasons and more, I have no expectation that the Magic Kingdom will declare its independence and take an equal place among the sovereign nations of the world any time soon. But I think it’s quite interesting to realize that even such a seemingly crazy idea as this probably would actually work better for more people than any extant form of social organization.  That’s a sign that the future’s going to be a wild and crazy ride.

Against Exit

I find the Neoreactionary movement intriguing.  This isn’t because I subscribe to their core tenets or consider myself part of their movement, but rather because they are the only somewhat-cohesive community out there who is actually trying to cobble together a working worldview that isn’t based upon the currently dominant state religion.  And since I anticipate an epochal shift to likely happen sometime in the reasonably short-term, as far as these things go, it is interesting to get a glimpse at what might be built to replace the existing order.  They serve as one of the very few sources out there for genuinely lateral thinking.

The neoreactionaries are a strange brew.  Their primary influences appear to be the brainier sort of white nationalists, hyper-libertarian anarcho-capitalists, and religious reactionaries.  This tripartite set of influences have very little in common save their enemies and the fact that they look back to history to resurrect vanquished social models.  Coupled with the fact that they are so deeply unfashionable that only the most contrarian sorts are attracted to their banner, it goes a good way toward explaining why their community is so fractious.

However, it is interesting to note that this strange brew of ideologies is actually pretty close to what I’d predict based upon my caste model.  In this view, the neoreactionaries are renegade priests whose allegiance is to the new order that hasn’t yet emerged.  Very few if any of them are actually representatives of potential elites in exile.  But this is why they often have to field the critique from outsiders that they dream of themselves as kings and aristocrats in the new order.  The current order is theocratic, so of course they imagine that rival priests must be seeking to elevate themselves to the apex of the new hierarchy through their argumentation.

And this is why their collective thought process divides along three main lines.  Their ethno-nationalists are priests who long to support a new aristocracy.  In particular, they seek a new order based primarily upon the military defense and naked self-interest of the nation or the folk.  So they spend much of their time and energy critiquing any present deviation from these principles as morally suspect.

Meanwhile, the traditional religious reactionaries are obviously priests of the old gods.  They want to displace the usurper and return the Old Church (whatever their preferred denomination) to its previously dominant social position.  Since they see the primary source of social decay as the pernicious influence of the new gods of Diversity, they advocate for a new regime that would vigorously reinforce the old ways, allocating prestige and economic reward accordingly.

And, finally, the anarcho-capitalists – like all committed economic libertarians – are clearly supporters of a plutocratic order.  The ones who have ended up in neoreaction appear to have largely arrived via thinkers like Hoppe, who postulated that democracy and socialism were necessarily functionally linked.  Therefore, preventing the spread of socialism and, thus, preserving merchant values at the apex of the hierarchy was only possible through non-democratic forms of government.

As an aside, I would argue that Hoppe isn’t strictly right about that.  Democracy and socialism are actually linked somewhat contingently, in that the version of socialism that triumphed over the struggles of the 20th Century happened to combine the two.  There is synergy there.  But there is also synergy between authoritarian forms of government and socialist economics.  A formal democratic system is neither necessary nor sufficient, in and of itself, for the imposition of a socialist economic model.

Regardless, essentially the only thing that all three branches of neoreaction can find themselves agreeing on is the fierce moral urgency of unfettered Exit.  This is a term of art from the libertarian ideological corpus, where they compare and contrast the effects of ability of an individual to Exit a polity with the ability of an individual to have a say in how the polity operates (what is called Voice).

Partisans of democracy believe very strongly that Voice is crucial to ensuring responsive, effective government.  This is why they’re constantly organizing what they consider to be underserved communities.  The idea is to get them to exercise their Voice to change their situation for the better.

Many libertarians argue that this exercise of Voice is actually not necessary.  Making an analogy to private business, they say that the general public traditionally has very little say in how a business goes about providing their services.  Businesses don’t have to give their customers explicit votes on how they should operate in order to be very interested in keeping their customers happy.  All that is necessary is for there to be competition.  The more easily a customer can jump ship to a rival provider, the harder each business has to work to provide services at a cost that makes customers happy.  Whereas service almost invariably sucks when there’s a monopoly provider, precisely because the customer can’t get away.

They argue that there is no reason why this logic wouldn’t apply to governments.  Giving people a majority vote over the actions of a monopoly service provider leads to worse service and higher costs than allowing customers to switch freely.  Hence, Exit trumps Voice.

One of the founders of neoreaction, a fellow with the pseudonym of Mencius Moldbug, postulated that because of this, the best system of international order would be what he termed to be the Patchwork.  In this model, the world would be subdivided into lots of very small states, each ruled by a sovereign corporation.  Each corporation would be organized internally along modern lines, with the special addition that it would have complete sovereign authority over its own patch and little to no formal influence over the internal affairs of its neighbors.  The only meta-rule would be that each state would have to allow its citizens full Exit rights.  It’s essentially the principle of the Treaty of Westphalia (cuius regio, euis religio) taken to an inviolate extreme.

The idea of implementing the Patchwork itself seems to have few adherents even among neoreactionaries.  But they see it as a vindication of the supremacy of Exit over Voice.  So they loudly cheer on secession movements, whether historical or modern, and correspondingly praise lavishly the policies of tiny city-states like Singapore and Hong Kong.  And internally, the doctrine of Exit serves to paper over the wide differences in ideals, morality, and preferred forms of social organization among the neoreactionaries.  The idea is that, come the revolution, each sub-group will be able to live in a small state tailored precisely to their preferences.

This image is aesthetically pleasing in an abstract, theoretical sort of way.  And, to be fair, it does touch on some real trends we see in the social sciences.  People around the world seem to be happiest in small ethnically and religiously homogenous states.  National governments seem to have difficulty scaling, with larger and more diverse states tending to post worse scores on international indices on metrics of efficiency in government (like perceptions of corruption).  And along with this, the trend of recent history has been toward secession movements gaining ground.  For instance, many more countries have dissolved into their constituent parts after the Cold War than have unified, whether or not the action has taken place violently or peacefully.

There’s just one obvious problem with it.  The idea of the Patchwork completely elides the actual, functional reason why states exist and persist.  The State really isn’t a sovereign corporation that exists to provide basic infrastructure services to people in a given geographical region.  It’s better seen as the most efficient, best-scaling device humans have invented thus far to organize the maximum intensity and duration of directed violence from a given set of resources.  It only provides basic infrastructure and consumer satisfaction insofar as that helps with the core violence mission.  Or, in the medium run, it dies and is replaced by a successor state that’s more focused on Job #1.

Peter Turchin’s Cilodynamics does a pretty good job of modeling expected empire size throughout history based on geography and a few long-term ideas of how empires grow, shrink, and eventually die.  He certainly gets the purpose behind the State, the stakes inherent in military clash, and the counterbalancing forces driving state size.  And it’s pretty obvious that if you boot up one of these models with a Patchwork as the initial conditions, the patches would start fighting each other.  In little time at all, the small, weak, and inefficient patches would be rolled up into larger and larger empires until the world looked much like it does now.

So it’s worth taking a moment to reflect on why the world looks like it is evolving closer to a Patchwork-style model.  It turns out the answer is pretty simple, if somewhat difficult to see clearly for ideological reasons.  The USA rules the world.  It dominates it so thoroughly, in fact, that the selection pressure on all of the various states around the world has fallen precipitously.

That’s because most states are in no danger of being overrun militarily and annexed any longer.  Border adjustments now only happen by the leave of the USA.  Consider the fate of Kuwait.  Iraq had a reasonable casus belli, by traditional standards, and they were able to swiftly conquer the nation with their far superior scale and military prowess.  In the standard cliodynamic model, this is a straightforward profitable conquest.  But in the new post Cold War era, the USA vetoed this action by crushing the Iraqi army to restore Kuwait to independence.  Clearly the rules are different now.

This reduction in external pressure has led to the disintegration of many medium-sized states.  For instance, there is no longer any reason for the Czechs and Slovaks to be united under the umbrella of Czechoslovakia if they no longer need to fear German or Polish aggression through membership in NATO and the EU.  Based on similar logic, though with much more violence, many states in and around the Middle East have been encouraged to split apart.  They have done so either de facto, with broad and extensive regional autonomy (think Iraq, Syria, Afghanistan, and Libya) or formally (South Sudan, Georgia, and Israel).

Essentially, what we see as a superficially increasing trend toward a Patchwork-like future is actually built upon the lack of true Exit.  Every polity is becoming more dependent upon the sole global sovereign as it disintegrates.  At the endpoint of this process, one can easily imagine a world made up of a thousand or more small, relatively effective, reasonably homogenous or cosmopolitan states along the lines of Denmark or Singapore.  Meanwhile, the USA continues to uphold the meta-rules around the global economy and territorial integrity for each vassal in exchange for imperial tribute, high international prestige, and substantial influence over the internal affairs of each vassal state.

If the USA were to collapse into constituent states or to withdraw from foreign affairs, as many libertarians and neoreactionaries advocate, this process would be thrown into reverse.  The cliodynamic external pressures would return with a vengeance as the newly-sovereign nations started contesting with each other and seeking control over foreign resources for economic and military purposes.

Essentially, small average state size and the possibility of Exit are mutually incompatible goals.  Sovereign borders are always enforced by the threat of war.  And war is the purpose and health of the State.

Performance Enhancing Drugs in Athletic Competition

It is a regular recurrence in the sports news that various athletes are disgraced due to their having taken drugs that were intended to improve their athletic performance (known as PEDs for short). All of the world’s most prestigious sports organizations have banned the practice. And, to enforce this ban, expensive and intrusive testing regimes have been put in place. Taking a drug like this is broadly considered among the most grievous sins a sportsman can commit, ranking in severity alongside engaging in a conspiracy to throw a contest to the opponent for gambling purposes. Accordingly, disgraced individuals are routinely retroactively stripped of their honors and chased out of their sport.

Furthermore, these drugs have been placed on the list of strongly controlled substances alongside drugs intended for more recreational purposes by most every advanced society. This means that illicitly taking a drug intended to improve one’s athletic performance can possibly get one thrown in jail and branded a felon. The basis for restricting recreational drugs is the belief (whether well-founded or not) that they are addictive and will lead the user to destroy their lives and the lives of those around them. But, to my knowledge, no one makes that claim for PEDs. At worst, some of these drugs are claimed to make a person wrathful and violent, but that’s very a different situation than the cycle of despair and degradation that people fear from the other drugs on the controlled substance lists.

So, the question arises: why are PEDs so reviled? What exactly makes them bad enough that being caught taking them is grounds for stripping an Olympian of his gold medals? Or taking away seven Tour de France titles from the greatest cyclist of his generation? Or hounding the all-time home run leader out of the game of baseball and preventing him from being enshrined in the sport’s Hall of Fame?

The answer most people would give is that it’s cheating. There were rules against it and the rules were broken. So it’s not fair to the PED-user’s competition that they were doping (the official term for using PEDs illicitly). But that’s not really enough to explain the vehemence of the reaction. After all, there are lots of other rules bound up in athletic competition that people break in an attempt to gain an edge on their opponents. To take baseball as an example, the PED-users are reviled by many of the same people who consider a pitcher openly admitting throwing illegal spitballs (a baseball with a foreign substance added to it to make it harder to hit) to be charming.

Perhaps a look at the history will help make some sense of it. The first PEDs to get popular were anabolic steroids, which when taken by otherwise healthy individuals allow for the generation of significantly increased muscle mass. These drugs are how modern bodybuilders and ’80s action stars were able to achieve physiques akin to comic book heroes. It’s worth noticing that this is a stark difference from what people were able to accomplish before these drugs came along, and has considerably changed the conception of what an athlete can and should look like.

It turns out that the primary way these steroids help build muscle is by enabling a person to recover more rapidly from working out. The user is then able to train harder; to work out more intensely. And these workouts then build unprecedented amounts of muscle. A secondary effect of this mechanism is that it prevents the user from losing as much of the benefit of their exercise routine to injury or age. The drugs help the user’s body to respond to exercise as it did in the user’s healthy youth. This is why many athletes who have been caught using PEDs have reported that they only turned to them after a catastrophic injury.

The early steroids had some nasty side effects. Users would sometimes contract strange cancers. There were fertility implications, as the hormones generated by flooding the body with testosterone-like molecules led to the body’s negative feedback loops kicking in. And people using them unwisely could develop a wildly unbalanced physique that led to unusual injuries.

But from an ethical perspective, there doesn’t seem to be any obvious difference between PEDs and any other technique used to train modern athletes. Both can require expensive equipment, provide benefits in competition, and carry with it some risk of significant injury or death. If allowed by the rules of the sport, they quickly become mandatory among anyone who wishes to compete at a high level. And, like any other training technique, PED use doesn’t directly effect what happens on the field of competition. A person taking PEDs really does run the world record track time. It physically happened; it’s not like the runner hacked the electronic stopwatch to show a faster time or something.

Along those lines, it’s also interesting to note that athletes who are willing to risk their health for the sake of victory or of their teammates are commonly lauded. For instance, in basketball a player who bravely takes a charge from a bigger rival to draw a foul is praised. Similarly, in baseball a player who constantly hustles, even on plays that have a very low chance of success, is considered worthy of special praise. In contact sports like football or rugby, playing through severe pain to earn a victory gets a player all sorts of credit from fans and opponents alike.

It seems to me like virtually everyone actively wants athletes to put their futures at risk in the search of glory. Looked at coldly, it seems pretty clear that by taking a PED, a player is risking his health to some degree to enable him to sacrifice more of his life in the gym and, thus, triumph over his opponents on game day. This decision is commonly labeled selfish and beyond the pale. At the same time, the decision to play a championship game on a broken leg or with a torn hamstring is often considered the height of heroism, even when that can turn the injury into a lifelong source of agony.

Why the difference? Looking at the pattern of the people who are upset and those who don’t seem to care so much, it seems pretty clear that the opponents’ real problem with doping doesn’t stem from some idea of harm being done to the participants. For some reason, PEDs are considered a purity violation. It’s disgusting to take steroids to win an athletic competition. And those people with a strong sense of disgust recoil from the practice, while those who put little weight on classic purity issues tend to believe that PED use should not invalidate what actually happened on the playing field. But where does this disgust come from? What’s the real purity violation?

I contemplated this problem for a long time. And I think I finally figured it out. To cop a formulation from Robin Hanson: sports aren’t really about victory; sports are about sex.

The cliche about the high school sports star pairing off with the prettiest girl on the cheerleading team isn’t incidental. Nor is the NFL quarterback ditching his actress girlfriend to marry a supermodel. Or the stories of NBA stars with thousands of conquests. In this way of thinking it’s really the entire point of the endeavor.

This also explains the great disparity between the popularity of men’s and women’s sports leagues. Victory is especially sexy to girls, so everyone cares about who the champion of the men’s league is. This drives interest, which leads to money, which leads to professionalism and a constantly improving standard of play in a virtuous cycle.

Women’s sports leagues, on the other hand, commonly find to their chagrin that their most popular players are the prettiest ones. And classical female beauty is not terribly conducive to athletic performance. So the very most popular female sports are the ones that combine artistry and grace with athletic prowess. Women’s gymnastics, figure skating, and women’s tennis are examples of the form. But unless they take careful steps to prevent it, the competition for victory will tend to bring relatively unattractive but highly skilled athletes to the fore, which will erode public interest in the league.

So the most important part of the appeal of athletic competition is that it displays the athlete’s attractiveness to onlookers of the other sex. Think of it as a demonstration of one’s genetic value as a mate. And, therefore, the most attractive sports to most people are the ones that require the athletes to demonstrate attributes that would be highly prized in the sexual marketplace. This model implies that soccer is the most popular sport worldwide in large part because success in soccer demands excellent endurance along with good body control and coordination, ability to work on a team, and individual creativity in setting up rare scoring opportunities.

Bringing this back around to the original discussion, I contend that the purity violation stems from the fact that PED-fueled accomplishment is uniquely uncorrelated with the athlete’s genetic endowment. The “natural” player who barely needs to exercise in order to triumph is looked up to as a hero. The same goes for the genetic “freak” who combines rare size and strength with extreme agility and coordination to physically dominate his opponents. Those athletes would pass down their excellent genes to strengthen future generations.

From this perspective, the PED-powered sports hero really is cheating in the most pernicious way. A man who cheats on the field of play so cleverly as to not be caught is at least demonstrating the superiority of some of his inborn attributes. In the real world, cleverness can be as valuable a trait as strength. But the PED-user is unfairly taking the adulation belonging to the true champion. And, therefore, the women who naturally flock to the victor. That is the real source of the disgust at the “unearned” strength, speed, and endurance stemming from the drugs.

The Purpose of War

In the 1980s, a term from the business world entered the public consciousness: hostile takeover.  In any other context, it would sound like a rather maladroit euphemism for a violent activity.  One can imagine a mobster congratulating his lieutenant on successfully performing a hostile takeover of 3rd Street, by which he means that all the protection money is now going to their coffers instead of their rivals.  And, in fact, that was very likely the connotation the people who coined the term were going for.

In actuality, though, a hostile takeover is a completely legal, non-violent procedure.  It’s an outgrowth of the age-old laws undergirding corporate governance.  Under these laws, the CEO of a company is technically an employee of the company.  In theory, he’s no different than the lowest member of the organization chart.  He has broad authority to exercise in how the company operates, but like any other manager, he is supposed to be held accountable by his bosses.

The catch is that the CEO is at the top of the traditional pyramid.  So who can he then report to?  The answer is found in the institution of the board of directors.  The board is elected by the shareholders in a manner provided for by the governing corporate charter.  They are not tasked with any formal power over the company’s operations.  Instead, it is their sole duty to determine who the CEO is and what his compensation structure ought to be, based upon his performance (and, theoretically, the performance of the company).

It is common nowadays that the regulatory function intended to be provided by the board of directors is inoperative.  Most board members are financiers or CEOs of other companies who don’t take their responsibilities very seriously.  In practice, it is frowned upon for board members to have significant fractions of their personal wealth tied up in stock of the company for whom they serve, because any trades of that stock will be looked at very carefully for evidence of improper insider trading.  So what usually ends up happening is that the same set of CEOs and big bankers serve on each other’s boards and routinely vote each other large raises for even average performance.  Everybody makes money; everybody wins.

A takeover of a company occurs when there is a stockholder election that replaces the board.  The new board, then, quickly ousts the incumbent CEO and replaces him with someone else.  Presumably, he then goes on to change the course of the company, implementing policies that the previous management did not want to implement.

A takeover is said to be hostile when the incumbent management and board actively resist this shift.  Generally, this rapidly becomes a very expensive proposition.  That’s because the two opposing camps are both attempting to acquire control over enough stock in the company that they can win the upcoming election.  And SEC rules force investors to make public their intention to purchase large chunks of the stock, so it’s impossible to do this by surprise.  This means that the stock price inevitably jumps rapidly, as investors that were just holding the stock passively sell out at the newly increased price to factions that are actively contesting the election.

Why did this practice become so much more common in the ’80s that the term for it entered the lexicon of the general public?  It turns out there are two sides to the answer.  The prosaic half is that a few entrepreneurial financiers discovered a new, interesting tactic.  They found out that it was possible to borrow a lot of money on the corporate bond market by issuing so called “junk bonds” (bonds promising very high yields with a correspondingly large risk of default) and then use the cash to fund a hostile takeover of a target company.  Then, once the company was in their control, its assets could be sold off and the money used to pay back the issued bonds.  Anything left over was pure profit.

Naturally, this was quite unpopular.  Breaking apart existing, seemingly-functional companies for no seemingly good reason was seen as the worst kind of short-sighted greed.  After all, anyone who worked for a company targeted for this takeover would be thrown out on the street.  And the executives of this company, who had grown accustomed to the perquisites of rule, were equally appalled by the fact that the insurgents didn’t have any desire to run the company.  They were taking it over simply to kill it and sell the corpse.  So this brought both workers and management together in their resistance to this sort of hostile takeover.

But it is in the other side of the answer that all the insight is hiding.  The above is the well-known story.  But very few people ever stopped to ask the key question: how can this possibly work?  How does it make money?  After all, the whole hostile takeover process is very expensive.  You need to raise a bunch of money at above-market rates and then use it to pay off a bunch of shareholders in the hopes that you can seize control of a company and then liquidate it.

The value of the stock is driven by the expectations of the dividend stream it pays out, which is driven by the expectations of profit the company is generating.  This is the value of the company “alive”.  The value of the corpse, on the other hand, is just the asset value the company holds: the buildings; the scrap value of the capital machinery; the office furniture; etc.

So this liquidation value has to be much greater than the expectation of profit from the target company in order to pay for the transaction costs and yield a profit.  In other words, the only way the math works out is if the target company is worth a lot more dead than alive.  The greater this disparity, the more the incentive to take over the company.

Think about what this means for a moment.  If the company is worth a lot more dead than alive, that means that the company’s assets must necessarily be badly mismanaged.  Whatever justification the firm may have for continuing to exist is pretty weak if you can make way more money by just holding a garage sale to sell off the assets.  The greater the profit our junk bond wielding hostile takeover artist makes, the greater the implicit condemnation of the previous management.  They were granted control over all of these resources, in this organization, and through the alchemy of incompetence they managed to turn the gold inputs into base lead.

Sometimes, among certain sorts of radical, the operations of nation-states are imagined as if they were corporations.  Anarcho-capitalists, in particular, like to use this language in order to strip any luster the existing terms for government might have.  Instead of the King or President, he is merely the CEO of an organization that claims to be the exclusive provider for security services and contract enforcement within a certain territory.  Others, who bring with them a deep distrust of the corporate form, like to use it to delegitimize the very idea of government (after all, they argue, it’s just another oppressive company).

Personally, I don’t find either of these lines of critique terribly interesting, in and of themselves.  The key problem is that governments aren’t classical firms in a sociological sense.  They form differently, they operate differently, and they rely on different modes of thought and organization than a typical commercial firm.  It is not useful to elide the distinction between the two forms.

However, I think there is something there worth considering in the context of the discussion on hostile takeovers.  Let’s model the government of a country as a single for-profit entity, for the moment, regardless of its internal structure.  So, for instance, the US government wouldn’t just be the federal government.  Each state and local government would count as branches of the same organization.

In this model, a government’s revenue stream is the tax revenue from the people under its jurisdiction, however collected.  For this purpose, asset forfeiture, criminal penalties and fines payable to the State, and tariffs levied on foreigners all count as taxes.  Its expenditures are the sum total of all of the operating expenditures, transfer and interest payments, and so on that the government pays out.  The difference between the two we can call operating income or profit.

Now, just like in a common company, the profit can be retained by the State or paid out to the shareholders.  The analogous operation to retaining the earnings would be to spend the windfall on extending the operations of the State or to build up savings.  Paying out to the shareholders would look something like issuing tax rebates, arranging direct payments to various constituents, or moving money to the President’s Swiss bank account.  Of course, this isn’t the way any actual government keeps their books.  But with this model we’re aiming for functionally descriptive, so that’s all right for now.

So, when put this way, it rapidly becomes clear that there’s simply a phenomenal amount of money moving through governments.  The right to collect taxes in a territory is obviously a really big deal.  And when you look at it this way, some territories are worth a lot more than others.  For instance, Greek tax revenues are notoriously poor because people often just don’t pay their taxes.  Germans, on the other hand, pride themselves on fiscal rectitude and barely have to be policed as they shovel gold into the treasury.  And countries with functioning First World economies are worth way more than your typical sub-Saharan basket case.

The analogy to a hostile takeover in this model is pretty clear: war.  The transaction cost, equivalent to the effort to buy up enough of the outstanding stock, is the total of the costs in blood and treasure to create an army to annex the target nation.  And the benefit to a successful operation is the right to the tax stream and the opportunity to reorganize the acquired territory, analogous to how a CEO can juggle around the divisions of a newly-acquired company.

Generally, the cost to conquer a country is proportional to its tax revenues.  Advanced, industrial countries can build and afford to field armies with technically sophisticated equipment.  And the larger the country, the more soldiers it can generally rely on to rally to its banner.  That’s why the USA has a dozen supercarrier battle groups and Libya has some dudes driving around in Toyota pickup trucks with .50 caliber machine guns bolted on the back.  This is analogous to the price of the stock of a company being driven primarily by its financial fundamentals.

Therefore, war is profitable in this sense for the same reasons a hostile takeover works in the world of commerce.  When a state is mismanaging a territory such that its military power is substantially beneath the expectation for its tax revenues, or its tax revenues are substantially below what they could be under better management, there is opportunity for profit.  Just as with companies, it is possible for the potential tax revenues of a country to be below their liquidation price.  In that case, the profit-maximizing thing to do is to have the army carry away all the stuff, kill or enslave the denizens, and then parcel off the real estate.  A good historical model for this is how the Romans dealt with the smaller tribes they encountered as they began their rapid expansion across the Mediterranean.

This is neat!  We are now led to the answer to the age-old question.  War: what is it good for?  To ensure the rise and spread of high-quality governance.

Without the threat of war, there is no reason for the incumbent leadership to maximize their operational efficiency.  Just like how, in the ’70s, there was no need for the executives of American firms to make their companies more efficient and maximize shareholder return.  Instead, they could redirect big chunks of money to their own perks and to generous labor deals, all the while counting on their friends on the board to reward their largess with big cash bonuses.

This also has some interesting corollaries.  As is well-known, interstate violence has gone down sharply since the Second World War.  And since the end of the Cold War, it has been the policy of the world-dominant power to not allow states of any size to grow via conquest.  New states can be created through secession, like the Czech Republic and Slovakia being created from the dissolution of the former Czechoslovakia.  This policy was the casus belli for the first Iraq War, the intervention in Yugoslavia in the ’90s, and is the basis for the guarantees being extended to Ukraine in the present crisis.

This reduction in organized violence has been lauded as a great and wonderful thing by virtually everyone who has noticed the trend.  But if the preceding argument is correct, it means that it has also greatly reduced the incentives for good government on the behalf of the rulers of virtually every nation on Earth.  This is almost certainly bad.

I believe that we can see the effects of this most clearly in the history of post-colonial Africa.  Most African states have gone through periods of hilariously bad government since the departure of the colonial powers, making even the casual cruelty of the Belgians in the Congo look tame in comparison.  Civil wars and famine are common, refugees have fled countries by the millions, and most every industry is almost entirely based upon resource extraction.  Very little value is added through manufacturing or services throughout the continent.

Traditionally, upon news of any of these governance failures getting out, foreign countries would have started circling around like hungry sharks.  A country that can’t feed itself is also one that can’t defend itself.  A country whose security forces are actively fighting against each other is ripe for division and conquest.  And once the foreign troops roll in and take over, it’s in their interest to get things running smoothly again to maximize the tax revenue.

This is the key feedback mechanism by which the quality of governance has actually gotten better over time.  States with better institutions conquer ones with worse ones and reorganize them along more efficient lines.  If the new gestalt entity cannot scale to the new size, it will then fall apart after a successful secession movement, tending to leave two or more successor states with institutions inherited from the previous conquerors.

But nowadays, that cycle is broken.  What generally happens now is that foreign governments and NGOs come in and attempt to ameliorate the suffering with direct aid.  But because they cannot (or will not) take political control over the territory first, they end up having to work with whoever happens to be standing around with the most guns.  This is why you often see news stories about warlords in Africa controlling access to food aid and crucial medicine.  From the state-as-firm financial perspective, this foreign aid is just tax revenue that isn’t a function of the native productive capacity.  In fact, it has a negative relationship, since a country gets more aid the worse the local governing bodies do their job.

Some seemingly crazy stuff follows.  For instance, it would behoove an efficient charity with an interest in improving life in Africa (like, say, the Gates Foundation) to spend its revenue on sponsoring a coup d’état in a misgoverned African nation.  Then, once the country was conquered, bring in a bunch of outside experts to put it under modern administration.  Then, use the resulting explosion in tax revenue to bootstrap a wave of conquest across sub-Saharan Africa.  In a few years, you’d have functional, self-supporting institutions everywhere.  Then the problem of providing aid would be trivial.  If you even still needed it.

The Gates Foundation could probably get around the current prohibition on states annexing other states by just not formally annexing them together into one political entity.  All the fighting could be done with special forces guys, mercenaries, and assassins instead of regular army soldiers.  As long as the same people end up calling the shots in all of the countries, it wouldn’t hurt the plan to run the local governance through proxies for legal reasons.

The real, insurmountable problem with this plan is that it’s unabashedly colonialist.  It is based entirely on the idea that Africans have, on average, worse institutions and talent for rule than the available foreign equivalents, and that therefore it is good and proper that they be ruled by foreigners.  The very idea smacks of racism and is therefore verboten in this day and age.  So we get chaos.

Until the age of war returns.

The Epistemology of Violence

From the beginning, the dream of philosophy was that mankind could use reason to determine the good, the true, and the beautiful.  From there, the wisest among us could design ways to live that would be both in accordance with the will of the gods and the true desires of man.  This is the deep reason why Plato’s The Republic is the first venerated work in Western philosophy and is still studied today.

In the West, the sciences of the natural world saw a great flourishing in the 17th Century.  Foremost among the exciting accomplishments to come out of this time was Isaac Newton’s unification of astronomy and physics.  He discovered that the motions of the heavenly bodies and earthly ones appeared to follow the same few, reasonably simple rules.  It’s hard to overestimate the effect that this had on thinking people of the age.  Suddenly, the world was amenable to reason – to what was then called natural philosophy – in a profound way.  It was seen not only as a massive achievement in its own right, but as a harbinger of things to come.

Scientific advance drove technical improvements, which in turn supported further scientific advancements, on and on in a virtuous cycle seemingly without end.  Each generation brought new and miraculous inventions.  From the philosopher’s perspective, each advance served as conclusive proof of the correctness of the overarching project.  This collective optimism about the power of reason went by the name Enlightenment; the idea being that the progress of the light of reason was illuminating all of the dark corners of the world, bringing mankind out of the shadows of fear and superstition and into the light of the glorious future.

The evidence in favor of this proposition was overwhelming.  Reason, harnessed by the medical sciences, brought vaccines that promised to eliminate the scourge of disease from the Earth.  Reason, harnessed by the social sciences, was bringing forth new forms of government and society based upon reason and principle rather than mere blood and tradition.  And reason, harnessed by the physical sciences, had reshaped the face of the world.  By the end of the 19th Century, the progress of Enlightenment had even delivered cheap, ubiquitous electric lighting.  It was now literally possible for the power of reason to light up the night sky.

From this Enlightened perspective, the 20th Century saw the disastrous reversal of all of these beneficent trends.  The birthplace of the Enlightenment convulsed in two generations of total war.  At the end of the wildly destructive conflicts, as the hundreds of millions of corpses were tallied up, it became clear that the moral and social progress generations of wise men had prided themselves had dissolved in a wave of madness.  The realization broke the spirit of two generations of philosophers, artists, and intellectuals.

Most worrisome, from a philosophical perspective, were the reverses in pure mathematics.  Math had always been seen as the most rational of all the sciences.  After all, when one works in mathematics, one is working with pure concepts without any necessary tether to the messy, real world.  Proving a mathematical theorem is something like the frictionless ideal of the application of reason to determine truth.

At the beginning of the century, David Hilbert proposed a series of twenty-three open, important problems in mathematics.  The second of these was to provide a proof of arithmetic in the language of formal systems.  In a nutshell, he wanted to generate a finite proof based on a small set of axioms that would underlay the entire mathematical project.  After all, every advance in mathematics is built upon the natural numbers and basic arithmetic.  And, as every mathematician knows, any false assumption – no matter how seemingly small – invalidates the entire proof.  The starkest madness inexorably seeps through the smallest cracks.  1 = 0; halt and catch fire!

As is famously known, it turns out that Hilbert’s quest is impossible.  It’s not just that nobody can do it, or that we just haven’t found the correct brilliant angle to the solution yet.  It is impossible.  If you take all of mathematics as an outgrowth of a single unitary theory, a man named Gödel proved this impossibility rather famously in the ’30s.

At around the same time, a similar series of realizations were sweeping physics.  Fundamental limits to knowledge were being discovered.  For instance, Heisenberg’s uncertainty principle states that it is impossible to know both a given particle’s position and momentum to an arbitrarily accurate degree.  Similarly, decay times for subatomic particles are irreducibly random.  It isn’t just a statistical heuristic that half of a sample will decay in a certain timeframe; each particle’s decay follows that same pattern.

At the end of the wars, there was only one remaining strain of the Enlightenment project that was capable of inspiring the same spirit of Enlightened optimism in the future: scientific socialism.  By far the most popular strain of this was Marxism, as embodied most prominently in the Union of Soviet Socialist Republics.  The remnants of the Enlightenment project gathered under the Marxist-Leninist banner in an attempt to drive back the forces of darkness and reclaim the future.  They failed – completely, utterly, and ruinously.

This failure was predicted by Friedrich Hayek at around the same time Heisenberg and Gödel were proving the limits to knowledge in the realms of physics and math.  Hayek warned about the calculation problem inherent in a command economy.  In his view, scientific socialism could not be sustained in the long term precisely because it was impossible for the private utility information that went into price calculations to ever be known by the central planning authority.

From this perspective, post-modernism can be seen as the response to living in a shattered, post-apocalyptic philosophical landscape.  Enlightenment failed on its own terms.  Now our best minds are left to pick through the wreckage in an attempt to cobble something together out of discarded bits and pieces of half-functional concepts.  It is quite understandable, in this light, that the modern academy is full of people gibbering incoherently as Cthulhu cultists.  What else can they do?

But, believe it or not, the situation gets worse.  It has been taken as a matter of faith for thousands of years that the truth would set us free.  Veritas vos liberabit, as it is said in Latin, and as such inscribed in stone in countless institutions dedicated to the discovery and dissemination of the truth.  But what if it doesn’t?  Where are we left, then?

As the 21st Century dawns, careful investigations into the truth have systematically undermined the very idea of the search for truth itself.  A powerful example of this is the existence of the placebo effect.  Countless studies have shown that simply giving someone a treatment for a disease that they believe will be effective but that is guaranteed to be inert (e.g. a sugar pill instead of one with any active ingredients) is almost as good as giving them a real drug.

This is commonly known.  But what’s less obvious is that in lots of cases, the placebo effect simply dwarfs the actual effect.  Sometimes, up to 90% or more of the benefit of a drug comes solely from the ritual of taking a drug one believes to be good for what ails you.  And, crucially, the strength of the placebo effect appears to be based on the degree of faith the patient has in the process.  If he knows that he’s taking a drug that doesn’t do anything, he gets much less benefit from the treatment.  And if he merely suspects that he might not be getting the real drug (like if he’s a participant in a clinical trial) that is enough to significantly dampen it.

This means that, in a real sense, all healing is actually faith healing.  If you lie to your child and tell them that there’s a treatment for their disease and they’ll be OK if they just listen to the doctor and do what he says, your child is more likely to live.  Or, put more flippantly, this implies directly that truth is bad for children and other living things.

And the effect is corrosive.  Just knowing about the placebo effect makes treatments less likely to work.  It’s like a stage magician’s performance.  Once you know how it works, the magic is gone.  All that’s left is the artistry of the illusion.

Another example of this is depression.  It’s commonly known that people in the grips of depression have very low self-esteem.  Often, when interviewed, they will report that this is because they are nothing special.  In their eyes, they believe that they are less skilled than their peers, less talented, and just generally less valuable to society than other people.

One’s first instinct is to restore their spirits by explaining to them the true value of their potential accomplishments.  Things can’t possibly be that bad.  They’re depressed, miserable, and not thinking clearly.

But, lo and behold, when people did studies to determine the degree of the effect of depression on people’s self-assessments, they discovered something terrifying.  Depressed people are, on average, accurate in their assessments of their skills, abilities, and control over their surroundings.  It’s all the psychologically healthy people that are running around with unrealistically rosy self-images!

Think about what this means for a moment.  Depressed people are self-evidently broken.  They don’t operate anywhere near full capacity toward any goal that they may claim to value.  And, often, they report that there isn’t any point to doing anything.  They very well may be right about that, too.  It’s just like with the placebo effect: too much contact with the truth breaks you.

There are countless other examples of where irrationality proves to be superior to the rational alternative.  For instance, it is well-known that in the game of Chicken, it is a winning strategy to publicly throw one’s steering wheel out the window.  Once you credibly precommit to such an irrational course of action, then a rational opponent has no alternative but to yield.  A similar effect can be had by blacking out one’s windshield: when it is obvious to both sides that one party is blind to the brinksmanship, the party in possession of more information is forced to back down.

This is not a contrived example, either.  There are lots and lots of negotiations in the real world that can be modeled in this manner.  The same logic holds in situations ranging from nuclear-powered Cold War brinksmanship to unions negotiating contracts even down to birds deciding whether or not to vigorously defend their nesting ground from an intruder.  Basically, any time there is a situation where both parties lose drastically from a confrontation, lose slightly when they mutually shy away from conflict, and win when they fight and the other backs down, you have a game of Chicken.

As deep as these cuts may seem, it’s still theoretically possible to compartmentalize.  Treat the mind like a ship: if it is composed of lots of watertight compartments, then if one is compromised the damage is isolated to the source compartment.  Perhaps there is a way forward by approaching the search for truth carefully, in regions where the effects will not be altogether too dangerous.  Perhaps, then, the techniques Orwell referred to as crimestop should be actively embraced.  Seek to not tug too hard on fraying threads that cross your vision, lest one unravel one’s only defenses from the maddening truth.

Unfortunately, the hits keep coming.  See, the Enlightenment had put a great emphasis on what came to be called the Blank Slate theory, following an old philosophical idea that a man’s beliefs and actions are largely plastic.  They can be largely shaped by the proper education, training, and experience.  Which makes a lot of intuitive and practical sense.  Philosophy, as a discipline, makes little sense if one cannot expect to convince one’s fellows of various propositions through the use of reason applied jointly to arrive at independently verifiable truth.  There’s no basis for argumentation.

But it turns out that the blank slate is essentially disproven.  Virtually everything important is heritable.  And separated-at-birth twin studies show that shared environment (read: education, especially as transmitted from parent to child) has almost no impact on any life outcomes.  Statistically, the dominant factors are genetic endowment and/or prenatal environment (which does most of the work) and non-shared environment (which is basically “everything else”).  This means that big swaths of what we’d consider disposition, character, or personality are not amenable to reasoned argument.  They simply are.

Moreover, if one follows this line of reasoning to its ultimately maddening conclusion, it becomes clear that one should not model argumentation as convergence upon independent truth.  A person is, to a first approximation, never convinced of anything philosophically important; he is merely presented with an argument that he is inherently primed to find attractive or not.  If he is predisposed to adopt the belief, he will do so.  If he is not, he will not.  And, crucially, this process has nothing whatsoever to do with the actual truth or falsity of the proposition in some reasoned sense.

This explains a lot of otherwise baffling or curious patterns.  Like, for instance, why ideas that are popular never seem to die when they are discredited.  They can be made to be unpopular or otherwise socially disfavored.  But they always seem to be lurking under the surface, just waiting to arise again.  You wouldn’t expect this in a system that’s converging on truth, no matter how messily.  Once the thesis and antithesis have been synthesized, you wouldn’t expect a whipsaw back to antithesis twenty years later.  But if people’s disposition to find an idea attractive has not changed, then what are considered the deep and eternal philosophical truths is really just a matter of fashion.

It also explains how intelligent people tend to come to genuinely believe in abstract propositions that would so happen to benefit people like themselves.  Or, more strictly, groups which they consider themselves a member.  That’s why you have engineers attracted to libertarianism, for example.  On average, they’re personally responsible, economically productive, and don’t much like other people as a species.  So it follows that they are predisposed to go for a philosophy of political economy that says that everybody would be better off if people were left to their own devices as much as possible.  Artists tend to have the opposite dispositions, so they’re more predisposed to prefer some flavor of socialism.  Argumentation matters only insofar as exposure.

I have just concluded that the conscious, reasoned search for truth is both harmful and probably impossible to boot.  In the eyes of my philosophical ancestors, I have gone mad.  Sure, it doesn’t feel like it from the inside.  But it wouldn’t, would it?

Well, when in Rome … Ia, Ia!  Ph’nglui mglw’nafh Cthulhu R’lyeh wgah’nagl fhtagn!  And yet …

I cannot still help but think that there is hope.  Even if the human mind is not structured so as to be able to contact the truth safely, the truth exists.  There is a world outside my head.  There’s even a world outside our collective heads.  It is not as if the Enlightenment project was a complete failure, after all.  I write this essay on a computer, powered by a city-wide electrical grid, wrought by the efforts of countless men exercising their reason in a very particular way.  If nothing else, existence is a valid proof of existence itself.

In the best empirical tradition, these men tested their beliefs against the natural world through experiment.  One does not need to hold the truth in one’s head – to embrace the madness – if one is shaped by it through direct contact.  Test it and it works: this is proof enough that this small test, under these conditions, correctly represents a shard of the vast, dangerous truth.

By extension, if it lives, it lives in accordance with the truth.  If it thrives, it thrives in accordance with the truth.  And if it reproduces and spreads, it does so only in accordance with the deep, maddening truth.  In other words, the absence of any other reliable epistemology means we are necessarily thrown back on Darwinian measures to grope towards the truth.  Try everything: what doesn’t die is true.  Or, at least, true enough.

In this sense, contra the ancient tradition, force is a legitimate tactic of philosophical argumentation.  In the end, it is the final recourse.  The ability to kill a man is, ipso facto, a demonstration of better alignment with the unreasoning truth of the universe than one’s victim.  Just as Clausewitz declared that war is politics by other means, so too is war philosophy by other means.

It reminds me of the scene in The Princess Bride when the man in black presented Vizzini with the poisoned chalices.  “All right: where is the poison? The battle of wits has begun. It ends when you decide and we both drink, and find out who is right and who is dead.”  It did not matter at all what complicated chains of reasoning the glib Vizzini was able to construct.  All that mattered, in the end, was his fatal choice.

Seven Layer Model of Social Organization

Layer 7: Universal Culture (Big Religion)
Layer 6: Ethnicity/Broad Culture (Nationalism)
Layer 5: Clan (Extended Kin Group)
Layer 4: Tribe (Subculture/Monkeysphere)
Layer 3: Family (Immediate Kin Group)
Layer 2: Self (Identity/Consciousness)
Layer 1: Biology

The above is my seven layer model of social organization.  It’s supposed to be analogous to the OSI 7-layer model for network architecture.  The idea behind the network stack is that each layer is a layer of abstraction over some work to be done to move packets about.  Each layer presents an interface up and relies on functionality below in order to do its job.  Functionally, each lower layer is supposed to be ignorant of whatever is going on above, and treats the layers above it as payload.  In the real world, most of the Internet isn’t built on a pure expression of this seven layer model.  In particular, layers 5 and 6 usually get sort of compressed or ignored.  But if you’ve ever heard of “TCP”, “IP”, or “TCP/IP” in regard to network traffic, what you’re hearing is people talking about particular solutions to layers 3 (IP) and 4 (TCP) and their interplay.

Here are the basic rules to reading my presented stack layer.

  1. Each layer operates at its own level of abstraction, with its own impetus and requirements.
  2. Abstraction and scale both go up as you go up the stack.
  3. Higher layers are dependent on sufficient functioning of lower layers, but can often redirect/repurpose outputs.
  4. “Higher” need not imply “better” in this model (notably, though, “ethics” is high).
  5. Scale-up is a hard problem.  Each interface layer implies the solution to difficult scale issues.
  6. Good solutions at higher layers feed down the stack just as good solutions at lower layers ease implementation at higher layers.

OK.  Given this, let’s drill down into each of the layers and their interfaces.  Who knows, maybe we’ll find something cool?

Layer 1: Biology:  Here, I’m mostly talking about the physical needs of an individual person.  Water, food, sex, shelter … the low-level slices on Maslow’s hierarchy.  In particular, this is all the stuff that humans share with the rest of the animal kingdom.  Importantly, though, these needs have to be provided for in order for anything else at a higher abstraction layer to work.  Just as how it doesn’t matter what kind of fancy HTTP protocol you’re running if someone accidentally cuts the fiber-optic cable, it doesn’t matter what kind of society you’re trying to build if everyone is dead or entirely occupied with looking for a cave to cower in.  Presumably, non-social animals implement no layer above this one.

Layer 2: Self:  This layer is interesting.  It encompasses everything about the experience of being a person.  Consciousness, sense of self, all that jazz.  It interfaces with the physical layer in ways we still don’t entirely understand yet.  When you have the subjective experience of being tired, hungry, or cold, that’s the 1-2 interface in action.  Other subjective experiences, like the feeling of ennui, are probably pure second-layer concerns.  The kinds of fiction that are popular in university English departments are largely about gaining insight into the interior life of other people.  In this verbiage, they’re trying to describe Layer 2 implementation details.

Notably, anyone in a Robinson Crusoe type situation (a society of one) would not need to implement any layers above this one.  This is likely connected to the reason why anyone isolated from other people for too long goes crazy.  The built-in human software expects implementations of layers above this one.

Layer 3: Family:  Depending on social expectations, the extent of this layer waxes and wanes.  But, fundamentally, what defines a family group in this model is cooperation among people with the expectation that there is a close enough genetic relationship among many of the members to warrant significant altruism at the gene propagation level.

The biologist J.B.S. Haldane developed models wherein, as he quipped, he would not give his life to save his brother, but he might to save two brothers or eight first cousins.  The idea there is that since, on average, a blood sibling shares half your genes, from the perspective of gene propagation you should therefore be indifferent toward your own life or two of your siblings’.  This is the math that implies that group selection has a very weak effect at the genetic level.  A gene that imposed X fitness cost on its host but added value to those around it would need to have drastic effects in order to pay off for even somewhat closely related people.  Greater than 8X, total, if everyone helped were first cousins.  As the degree of relatedness falls, the required scope of effect for the given X increases proportionally, rapidly reaching impossibly large effects.

So, Layer 3 solutions have the great benefit of being biologically supported at the individual level.  There are strong evolved behaviors and preferences for parents to sacrifice for their children, or brothers for brothers, that can be reinforced and channeled at Layer 3.  The 2-3 interface is mostly about how behavior required of a person by their family is enforced.  Think of filial piety or an arranged marriage as examples of Layer 3 solutions that need to be enforced through the 2-3 interface.

Layer 4: Tribe: I define a tribe as a group of people that are small enough to fit into a member’s Monkeysphere, but generally larger than the family/kin group.  This means that there is sufficient mental hardware for each person to consider each other person in the tribe as a fully-realized person.  But, at the same time, there are too many people too loosely related for the genetic basis of family cooperation to be expected to kick in.  Layer 4 solutions can therefore rely on built-in human social mechanisms to smooth over disputes and maintain affiliation.  When tribes grow too large, the usual solution is fission into two or more Monkeysphere-sized subgroups.

The 3-4 interface layer is interesting.  Because the tribe layer and the family layer are so close together in size and abstraction level, oftentimes tribes will adopt the rhetoric of family to strengthen the ideological in-group loyalty bonds.  The institution of “blood brothers” or small, intensely devoted cults are dramatic examples of this pattern.  If the given Layer 4 implementation does not require (or wants to override) the Layer 3 solution, it can route around it with a direct 2-4 connection.  This can be undeniably effective in the short run.  But it is common that solutions of this sort flame out after a while.  In particular, a stable, long-running society needs to solve the problem of rearing the next generation.  And it is very difficult to get people to bear the intense personal costs of child bearing and raising without Layer 3 incentives.

Layer 4 is the top layer that was implemented in the EEA.  Implementations of Layers 3 and 4 are therefore commonly reprised throughout history.  Basically, any time you leave enough people alone for long enough, you’ll find that they’ve come up with solutions at this scale.  But any layer above this one has to be implemented purely in software, so to speak.  They aren’t natural innovations.  And, as such, they respond to different pressures and need to be maintained using different mechanisms.

Layer 5: Clan: A clan is a conglomeration of neighboring tribes that all share some common traits.  Oftentimes, this can happen after one particularly successful tribe has split several times and come to dominate a large territory.  But this isn’t strictly necessary; clan-type organizations can also be readily observed in virtual communities.  Reddit is a good example: the various sub-reddits are usually tribes of various sizes, all under the broader clan banner of Reddit.  When events occur that seem to threaten the interests of all the tribes, the resolutions they seek to their disputes occur at the clan level.

Layer 5 implementations generally require a powerful outside influence to unite the fractious tribes around their common interest.  This makes sense.  Historically, the point of a Layer 5 solution is to deal with the pressing problem of group selection.  Which is a fancy way of saying that sometimes a society is confronted with existential military threats that cannot be handled by groups that only have Layer 4 or lower solutions.  Since clans cannot rely on solely on genetic relatedness or Monkeysphere personal loyalty to enforce cooperation, another motivation is required: fear of the outgroup.

The 4-5 interface layer is commonly implemented as something of a council of tribes where each can send representatives of their interests.  Alternatively, a single tribe can be designated as the rulers (led by a king of kings).  In this case, the ruling tribe is responsible for rallying the others against an encroaching power in exchange for a greater share of the rents or spoils.

Solving social problems other than “We’re all going to die!” at Layer 5 is somewhat clunky.  This is because the 4-5 interface layer is necessarily weak.  There isn’t a lot of loyalty to call on at Layer 5 to override or restructure lower-level behaviors, because most of it is held at Layer 4 or constructed using logic that’s compelling at higher layers.  At the same time, though, the loose coupling enables targeted large-scale cooperation at inexpensive prices.  If you don’t need uniformity among the tribes, this can be a substantial virtue.

Layer 6: Nation: A nation is a synthetic, high-abstraction, fairly-cohesive identity group.  Generally, a nation is based upon some combination of idealized versions of race, language, and shared cultural artifacts that compel the loyalty of the member of the nation.  The most common Layer 6 implementations vest political power in a single entity that is supposed to govern and represent the nation.  Hence: nation-state.  This pattern has become so common that the rare modern exceptions are usually engaged in active secession movements.  Think Kurdistan, Scotland, Catalonia, Quebec, and Palestine.  Each of these places has a nation (or, at least, a large number of people claiming to speak for an authentic nation) but does not have a state to go with it.

Significant effort is required in order to foster and maintain loyalty at this layer.  Familial and tribal loyalties happen automatically; clan loyalties are weak and narrowly scoped; but national loyalty has to be consciously built and carefully maintained.  This is often done through universal schooling, sponsored veneration of particular cultural products (like flags and anthems), and careful policing of people with suspected dual loyalties.

There is an argument to be made that Layers 5 and 6 address a similar level of social organization.  Implementations at either layer result in wide-ranging polities that solve the issue of collective self-defense.  The main difference could merely be that they address the issue differently, not that they are acting at substantively different levels of scale and abstraction.  It is just that one is thin and the other is thick.

But I think the more proper analogy is to Layers 3 and 4.  Many tribes use the language and logic of family.  Similarly, many implementations of nations seek to solve the problem that clans generally solve: collective self-defense and dispute mediation among component groups.  But that doesn’t make the two adjacent layers equivalent.

The 5-6 interface layer mostly consists of bureaucratic solutions to national administration that accept the legitimacy of subsidiary Layer 5 institutions.  In an analogous way to how Layer 3 can be cut or worked around via a direct 2-4 interface, national administrations of sufficient influence can issue directives that interface directly with lower levels.  This implies the potential existence of 2-6, 3-6, and 4-6 interfaces.  Strictly, in terms of the stack, what this really means is that the intervening layers are implemented merely as pass-throughs for those functions.  But it’s important to note that there can still be possible interference from intervening layers.

Layer 7: Universal Culture: National cultures (Layer 6) seek to unify a particular group of people around abstractions that are tailored to appeal to them in particular.  Only certain people are eligible for admission into any given nation.  And, it is important to note, this eligibility is generally drawn from deep history or cultural affinity.  Logically, you are accepted into a nation only if you always were a national.

This serves as a sharp contrast to a universal culture.  A universal culture is one that everyone is in theory eligible to join.  This means that the distinction between the ingroup and the outgroup at Layer 7 is entirely a function of ideology.  Examples of universal cultures throughout history are primarily big religions (e.g. Christianity, Islam, and Communism).

And since the boundaries between the ingroup and the outgroup are malleable – it’s much easier to convert someone to Catholicism than it is to make them a Frenchman, for instance – this implies that it is at least theoretically possible to convert everyone.  And, perhaps unsurprisingly, this theme of all men eventually discovering the truth of the universal culture’s claims and pledging their loyalty accordingly is quite common.  Just as the proletariat will eventually triumph over the bourgeoisie, so will the Ummah come to encompass all of mankind.

Universal culture, therefore, is the theoretical top of the stack.  You can’t get bigger or more abstract than everyone.  This means that generally, motivations that stem from motivations of universal culture will be seen as the highest or most noble.  Conversely, there is very little inherent support for any directive stemming from this layer.  Self-interest or some rough analog can make sense at every lower layer, becoming more obviously applicable the lower you go.  But all the support for Layer 7 operations has to be done purely in software.  This makes universal culture inherently very fragile.

A society that relies on a strong Layer 7 implementation for scale will also require good implementations at the lower levels to support it.  However, at the same time, solutions that are hammered out at Layer 7 are highly portable.  Innovations in universal culture can spread like wildfire compared to tribal knowledge.

The 6-7 interface layer is invoked whenever someone is asked to choose God over country.  Or whenever national policy is affected by concerns that are seen as higher or broader than the national interest.  So, to the degree that IR realists’ projections do not map to reality, that is the 6-7 interface layer at work.

Just as with Layer 6, universal culture often has influence all the way down to the individual.  This means it can make sense, at times, to talk about 2-7, 3-7, 4-7, and 5-7 layers as well.  Whenever someone is called to apply the demands of their applicable universal culture to their responsibilities at a lower level, this is evidence of an interface.

Interestingly, though, the mediation often happens at a higher level even when it seems like it shouldn’t.  Even universalist religions that lean strongly on the idea of an individual’s immortal soul rarely attempt to make use of the 2-7 interface.  It’s just too unreliable.  Instead, in these cases, you see innovations like state-sponsored churches, which can be used to align the implementations on Layers 6 and 7.  And individual clergy usually work through a particular church franchise (Layer 4 or 5, depending on the scale) where the people are expected to break up into small groups (Layer 4).  And often the adult members of families are expected to help indoctrinate their children (Layer 3).

I’m not sure how much predictive power this model actually has.  In theory, it could be used to help intelligently craft a novel, sustainable society.  But at the very least, it seems to be reasonably descriptive.  And it’s universal enough to possibly be used to help classify and understand how any given form of social organization actually works.

Follow

Get every new post delivered to your Inbox.