How I Lost The Plot (part 1)

Shannon’s information theory and Turing’s computer set the stage for the next generation of science, largely in ways they wouldn’t have predicted. Recently the idea that the world is a video game or simulation has caught on as a silly meme. Okay, whatever, some silly philosopher or tech billionaire thinks we live in a simulation; what does that even mean?

A more interesting question is why this idea has surfaced. What is it about our knowledge of the world that has made it start to seem feasible? In the 17th century, not being a Christian was insanity. Religion was by far the most reasonable and pragmatic explanation for human existence. For fun, let’s imagine a counterfactual world: You are born and grow up on a 500×500 foot plot of land, which has everything you could need. Your parents and a few other families live there as well, and they can all trace their lineage back four generations, to the day two people appeared on the ground. If you go to the edge of your plot there is a mirrored wall you can’t go past. Your great-great-great grandparents said a nice man named Steven came down one day and told them he was running a cool new experiment: creating people whom he loved.

You would have to be one hell of a person to doubt that story; it’s incredibly reasonable. It is perhaps the simplest explanation. The idea of a small plot of land with mirrored walls sounds weird to us, so it shocks our thinking into a new frame of reference. But the truth is it’s no weirder than reality; we’re just used to the strangeness.

But perhaps in this reality thousands of plots of land exist next to each other, the mirrored walls extend up three miles, and spores cross the barriers and seed new plots with new life. Super weird, right? That seems way less likely than Steven making us. This is why so many brilliant people never asked the question. Eventually some weirdo tries to climb the barrier and measure spore content with new technology. Eventually some weirdo comes up with the idea that species change over time.

Kuhn called this a paradigm shift. In his mind a paradigm shift is some profound change that sets off a new series of scientific and societal revelations. While he thought the paradigm shift was an essential aspect of science, to me it seems more like an empirical observation. Either way, it does seem to have been the case. Before Darwin there was Galileo. He would have been the guy who proved that the mirrored walls of my counterfactual world don’t actually extend upwards infinitely.

These ideas began to chip away at the mystical. When something appears far beyond our ability to explain, we usually don’t just play it cool and admit we have no idea. We prefer to make up a story. Galileo and Darwin are classic examples. I suspect, though, that little things like the germ theory of disease played a role as well. In the past the plague was thought to be divine providence and punishment.

The funny thing is that, like relics of the past, these broken ideas still infect the weaker or more susceptible members of our population. We have people who are against vaccination and believe evolution is evil. In fact billions of people believe a Bedouin warlord from around 600 AD, who was violently murderous and rapacious (in his defense, they all were), had his heart cleaned by the angel Gabriel in the desert, and now follow his life advice. I’m coming far closer to edgy internet atheism than I would like. Honestly, religious people are some of the best I know, Alyosha Karamazov for example. Still, I think it’s a fun way to describe Islam, given the current Western obsession with acceptance.

Now Shannon and Turing come along. What if the universe can be explained by information content? What if, using binary information, we can create machines that compute information in highly complex and dynamic ways? If you showed Turing and Shannon World of Warcraft, I bet they would never have predicted that the interaction of their ideas could result in a basic simulation of reality.

Building on these ideas, Kalman created his filter, built on the concept of extracting information from complex and noisy environments. Now we have adaptive neural networks, a model class that on at least a few dimensions looks a lot like our brain. The quantitative nerds of the 20th century had math, but their math was distinct from history and literature. Its use in the social sciences was at times remarkable, but still distinct from the messy noise of reality.
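To make the filtering idea concrete, here is a minimal sketch of a one-dimensional Kalman filter tracking a constant hidden value through noisy measurements. The setup, noise levels, and variable names are my own illustration, not anything from Kalman’s papers.

```python
import numpy as np

# Minimal 1-D Kalman filter: estimate a constant hidden value from noisy readings.
rng = np.random.default_rng(0)
true_value = 5.0
measurements = true_value + rng.normal(0, 2.0, size=50)  # noisy observations

estimate, variance = 0.0, 100.0    # vague prior: we know almost nothing yet
process_var, meas_var = 1e-5, 4.0  # assumed noise levels, purely illustrative

for z in measurements:
    # Predict: the hidden value is assumed (nearly) constant, so uncertainty grows only slightly.
    variance += process_var
    # Update: blend prediction and measurement, weighted by their relative uncertainties.
    kalman_gain = variance / (variance + meas_var)
    estimate += kalman_gain * (z - estimate)
    variance *= (1 - kalman_gain)

print(f"filtered estimate: {estimate:.2f} (true value {true_value})")
```

The point is only that a simple recursive rule can pull a stable signal out of a noisy stream, which is the same job the essay keeps asking our brains to do.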

In the mathematical modelling of the 20th and 21st centuries, the goal was for a clever human to conjure up the near-inexplicable magic of his brain to produce mathematical ideas that match our economic and political realities. My previous boss was an academic, and when I asked him how he came up with solutions to his stochastic differential equations he said, “I look up to the sky at God, and I say ‘God, give me a solution to try.’”

We could model nuclear war using game theory, or run regressions on economic trade outcomes, often at arbitrarily high levels of mathematical equilibrium refinement. But all the work following Nash wasn’t exactly blowing open the hatches of reality and letting us stare at the fundamental truths of human behavior. Stochastic differential games are cool, and pretty tough to crack, but as smart as those mathematicians are, they weren’t shifting the paradigm.

With our new models in the 21st century we are getting closer to blowing that hatch open. Or at least we can see where the tunnel might lead. When we can create a computer that can study, classify, and interpret the human social sciences better than a human brain, we will begin to put nails in the coffin of guys like Hegel and Marx. At the very least we could prove them wrong. We can’t really prove them wrong now, because we have no sufficiently advanced method that can bring precision to these grand overarching solution concepts, which as of now only the human brain can tackle.

Philosopher David Stove wrote “For even in a single paragraph of Hegel, say, there is, presumably, not just one thing that has gone wrong, but half a dozen things which have all gone wrong together; while we are not able to identify a single one of them.”

It’s still hard to imagine what a proof of incorrectness would look like, but a computer that could scan all the information of history and operate more powerfully than a human brain could at least underdetermine Hegel’s arguments by dynamically learning and finding counter-examples to his historiography.

Suddenly the mysticism of our brain starts to die. No longer will the brains of some revered theorist, philosopher, or economist be held up as magical black boxes of understanding. Debates on what democracy means, on what justice means, were viewed as meaningful and worth having, rather than as our computational brains working together to filter an abstraction that’s meaningful to us out of a noisy world.

And what happens to these concepts if we kill the magic of our brains? Our brains deal in algorithms. Many of the high-dimensional correlation and matching features of our brains far exceed the models we have. Yet machine learning models are slowly starting to match and overtake the human brain. Machines can drive a car now. Gone are the days of the mystical ability of eyes and interaction. Now we know that sight is simply light information processed by our brain. If we take that same information, codify it, and have it processed by a computer, is this not, in every meaningful way, the same sight?

Our world is closer to The Matrix than our great thinkers ever understood. Instead of inexplicably sophisticated humans using language and knowledge to define human rights, government, and beauty, we are evolved computers living in a world defined only by information. If art is beautiful, it is due to an evolutionary quirk in our programming. If Bach is extraordinary, it’s because his gifted and well-trained computer generated melodies we evolved to find beautiful. If justice and human rights are important, it’s because our experiences lead us to identify a pattern of information that we believe improves outcomes, classify it as justice or rights, and claim it is important.

If this is all true, how would we view our current political system and debates? I imagine it all as complex atomistic functions (humans) interacting in a simulation. I’m hardly the first person to imagine it this way; political scientists already code and study simulations. Their simulations are too rudimentary to capture the complexity of the world. So we’re left with our brains, which can run bastardized simulations.

We can’t come to precise estimates, and we can’t have the people in our heads interact in a mathematically rigorous way, yet we can consider highly dimensional data. Our brains won’t tell us if we have enough data, and it’s hard to validate our model. One heuristic I use: for every hour of additional data I add by studying the simulation at hand, how much does my outcome change? If it changes a lot my brain hasn’t converged; if it changes little my brain is converging.
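A rough way to code up that heuristic, with the “one hour of data” batching and all the numbers purely my own framing: keep adding batches of evidence and watch how much the running estimate moves.

```python
import numpy as np

# Toy version of the "does my conclusion still move?" heuristic:
# add one batch of noisy evidence at a time and track how much the
# running estimate of some quantity shifts with each new batch.
rng = np.random.default_rng(1)
batches = [rng.normal(loc=0.3, scale=1.0, size=50) for _ in range(20)]

seen = np.array([])
previous_estimate = None
for hour, batch in enumerate(batches, start=1):
    seen = np.concatenate([seen, batch])
    estimate = seen.mean()
    if previous_estimate is not None:
        shift = abs(estimate - previous_estimate)
        print(f"hour {hour:2d}: estimate {estimate:+.3f}, moved by {shift:.3f}")
    previous_estimate = estimate
# Early batches move the estimate a lot (not converged); later ones barely budge it.
```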

To run a proper simulation in your brain you need to be able to completely detach yourself from who you are, from your own emotions, wants, biases, and interests. This is the most fundamental reason I know of why activists cannot be strong scientists, at least not in the realm they work in as activists. They are so rooted in their conception of the world that they struggle immensely to view the world through someone else’s eyes. It’s funny that cartoonists are great at it, maybe because cartoonists need to understand the world from many perspectives.

My favorite example of the activism and science dichotomy is George Orwell’s relationship with the British far left and communist party. I mean, you had to be a socialist in Britain in the early 20th century. For essentially the first time in history there was a massive consumer surplus at the top of the social distribution, industrialization was not meaningfully improving the lives of the working class, and the idea of socialism had been born but not yet tested. The main debate among the far left was whether to go full communist or to integrate the best pieces as social policy.

This is part of why progressives drive me absolutely mad. I was talking to a friend recently about how it’s not fun bashing Christian conservatives in the U.S. because it’s too easy. Scott Alexander wrote:

[E]ven though conservatives seem to be wrong about everything, often in horrible or hateful ways, they seem like probably mostly decent people deep down, whereas I have to physically restrain myself from going on Glenn Beck style rants about how much I hate leftists and how much they are ruining everything. (from “Right is the New Left”)

His explanation, though, is that a small subgroup of people are interested in being contrarian, and the pendulum swings back and forth. He explains it with an analogy to cellular automata; it’s worth reading. It has a valid claim to being correct, but if it’s wrong I think this is why:

Progressives essentially run our country. Their reach doesn’t extend through every town, so the idea is not absolute. If you are a woman who wants reproductive health care you justly want the ideas of progressivism to exist across all towns. I might disagree with a progressive philosophy of science, but, to set the stage, that doesn’t imply I am somehow radically against their every goal and fight.

Progressive ideas do control all of our top universities, our social policy, our foreign policy, our primary media, and have a strong claim on our richest, smartest, and most fashionable. Control sounds sinister, but it’s not sinister or a conspiracy; it’s simply a natural outcome that I believe is based on an idea, a model, of progressive thought that stretches back centuries. Moldbug calls it The Cathedral; it’s impossible to beat his explanation.

It is based on a religious belief that progress is clear and measurable, that there is a set of actions to achieve it, and that so long as we work towards these goals our world and country will improve year over year.

It forces you to view your opponents in one of two categories: either they must be evil or they must be stupid. There is no alternative. If you replace the word stupid with uneducated, though, it lets you feign a sort of condescending empathy. If you look at British Leave voters you can see half the articles are on racist bigots, and the other half on a campaign of misinformation and lies interacting with the uneducated masses. I used Google to get those two links; there are thousands. Writing one of these two archetypal posts is the hot thing to do if you’re a smart progressive who is horribly depressed.

If the progressive classification of history is wrong, how is it wrong? And where did it go wrong? There is no obvious reason it can’t be fixed, either; we fix things on the fly all the time. I’m not going to suggest the answer is to burn it to the ground. Whether it actually can be fixed I’m honestly not sure, but that’s an empirical question.


Biased Classifiers

It seems most of our wars, love, and art can be explained by how we use our prefrontal cortex to understand our ape nature, while never truly being able to escape it. It’s the cleanest definition of what it means to be human. Before that model of the world, our best classification was that we were the product of divine creation.

It’s an interesting way to understand political interaction, and it’s how I see the world. There are no Manichean battles or poetic struggles. It’s a series of physical computers processing the world, relaying information to one another, and doing it all while having their preferences shaped by the unbelievably complex yet comically base machinations of our evolutionary programming.

As computers we have evolved to absorb massive amounts of information almost effortlessly. If I showed you a matrix of 1s and 0s it would be nonsensical. If those 1s and 0s encode pictorial information and are read through a program that maps them to colors, you could process and understand thousands of bits of information seamlessly.

If those 1s and 0s represent a puppy, it will trigger a reaction of love and warmth, because humans who bonded with dogs and made use of animals did better than humans who didn’t. We are machines that process vast amounts of information, classify it with incredible accuracy, and respond as we were programmed.
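A toy version of the raw-bits-versus-picture point (the tiny 8×8 “image” here is obviously made up): the same 0s and 1s are unreadable as a string but instantly parseable once mapped to pixels.

```python
import numpy as np

# The same 64 bits, first as a raw string and then rendered as an 8x8 image.
bits = np.array([
    [0,0,1,1,1,1,0,0],
    [0,1,0,0,0,0,1,0],
    [1,0,1,0,0,1,0,1],
    [1,0,0,0,0,0,0,1],
    [1,0,1,0,0,1,0,1],
    [1,0,0,1,1,0,0,1],
    [0,1,0,0,0,0,1,0],
    [0,0,1,1,1,1,0,0],
])

print("as raw bits:", "".join(map(str, bits.flatten())))

# Rendered as pixels, the pattern (a crude smiley face) pops out immediately.
for row in bits:
    print("".join("##" if pixel else "  " for pixel in row))
```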

Estimating models using numerical methods isn’t part of our evolutionary toolset, so we made computers to do it for us. Classifying dogs is part of our evolutionary toolset. At a general level it’s incorrect to say one of those is harder than the other. We are all insanely complex and efficient computers.

Our puppy is an easy way of studying how we interact with information and how it makes us feel, in a relatively one-dimensional way. If we didn’t know about evolution we couldn’t explain why we loved puppies; we just would, and we would feel it as self-evident. Old Yeller was obviously man’s best friend.


Humans evolved tribally, and we are great at classifying tribes. I think it’s probably impossible to break free of tribal thinking. It’s such a natural and comforting way of viewing the world.

Most modern-day activists are motivated by protecting who they view as their in-group, and they enjoy fighting with their enemies. As in, they get a rush out of it and it’s fun. It always reminds me of a Vietnam War documentary where the director skipped the platitudes and asked the vets if they enjoyed killing. Some of them loved it; why wouldn’t they? I sometimes think it has to be one of the biggest open secrets that most humans can learn to enjoy killing.

On Reddit, /r/SyrianCivilWar is also a great example, where you can choose a flag that best represents your faction. At least with regards to Syria there is no pretending. Our foreign policy seems to be built on the assumption “Can you guys just act like you aren’t strictly loyal to your tribal culture for a second and see where that takes us?” Good luck with that. McChrystal revolutionized our anti-terror force in Iraq when he realized the key to intelligence was to work within the tribal patchwork.

Part of our problem is that humans didn’t evolve to belong to tribes of millions of people, spread out across space, connected through networks. What do you see when you look at /r/SyrianCivilWar? Misguided humans? Freedom fighters vs. evil? Or a bunch of broken robots trying to classify and optimize their world using out-of-date software?

They remind me more of the machine learning models I optimize. I mean, for goodness sake, humans once believed, for rational reasons, that sacrificing other humans on top of pyramids made logical sense (it was rational because there was no reason not to expect that it would work; after all, if your gods can make magma spew from the earth, or whatever, why wouldn’t sacrifice make crops grow?). You can swing on the branches of technology and scientific advances others made, but for the most part human thinking is no better than what we see as the idiocy of the past. The only difference is that we get to be constrained by the accumulated scientific progress of humanity.

We have convinced ourselves that our modern domestic policy is guided by science (as opposed to !science). The truth is that it’s the other way around. All research into sociological or political statistics has to be motivated by a human’s inference of the world. Most of the time this results in strangely biased results.

Andrew Gelman notes two studies on racial bias in police shootings that come to different results. How is this even possible? Isn’t science foolproof? What turned this science into !science? It has to do with human failures more than any numerical failures.

  1. A human perceives the world and forms a hunch.
  2. A human uses their knowledge of the world to collect data that they believe, based on their human classifications and judgement, is an accurate representation of the world.
  3. A human uses classifications of various model types, and their historical successes with other data, to decide which one to use.
  4. A human runs a series of models, tests, and validations that are hidden from the reviewer (these errors are well documented under the Bonferroni correction and the garden of forking paths; see the sketch after this list).
  5. At each stage a human analyzes the results, and in his or her mind simulates the higher-dimensional nature of reality they might be reflecting. From Gelman’s post: “This is not to say we can’t cook up a plausible sounding story to support this result. For example, officers may let their guard down against white suspects, and then, whoops, too late! Now the gun is the only option.” Coming up with what sounds ‘plausible’ based on parameter estimates is par for the course.
  6. A human chooses the right data, model, and ‘story’ to present the results.
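Step 4 is easy to demonstrate. Here is a minimal sketch, on entirely made-up data, of how testing many outcomes on pure noise and then reporting the best one manufactures ‘significant’ findings:

```python
import numpy as np
from scipy import stats

# Garden-of-forking-paths toy: two groups with NO real difference,
# but the analyst gets to test 20 different outcome measures and
# report whichever one comes out significant.
rng = np.random.default_rng(42)
n_experiments, n_outcomes, n_per_group = 1000, 20, 30

false_positive_runs = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_outcomes):
        a = rng.normal(size=n_per_group)   # group A: pure noise
        b = rng.normal(size=n_per_group)   # group B: pure noise
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:               # report the "best" outcome
        false_positive_runs += 1

print(f"runs with at least one 'significant' result: "
      f"{false_positive_runs / n_experiments:.0%}")  # roughly 1 - 0.95**20, around 64%
```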


You all already knew this. I mean, you might not have been able to break it down, but you knew something wasn’t right.

Each tribe still has enough fuzzy information to retain their claim to righteousness. Fuzzy information is what I call the class of knowledge that clearly exists to any one of us, yet cannot be codified into any sort of !science model.

In the example from Gelman’s page, it has to do with the worst police encounters black Americans have that don’t result in a shooting. That’s fuzzy information; it doesn’t map to any discrete data set. It’s not a clear enough event to grab mainstream attention. It circulates as common knowledge among progressive and black communities.

The numbers are not great, but per Wikipedia about 1,000 Americans are shot by police per year. The difference between the number of black people shot and how many ‘should be shot’ based on proportional representation is a couple hundred. That’s about how many babies drown in pools per year. We’re a country of 320 million.

The devil is in the fuzzy information as it exists within tribal knowledge. You won’t find it in any statistics. It’s a classification issue we struggle to estimate ourselves and to communicate. Those who don’t belong to your in-group, who don’t have access to or trust in your tribal information, simply won’t believe it. And, sure, they could read James Baldwin or testimonials. But that’s anecdotal evidence, which isn’t !science. Seriously though, that stuff is manufactured evidence, which humans instinctively seem to mistrust if it’s produced by an out-group. It’s not the same as experiencing it yourself, organically, straight from the population distribution.

Still, it is true that issues regarding racial injustice are very widely discussed. They have been a core political issue for our country for centuries, and are currently the central platform of the progressive party. Caring about these issues is very fashionable, and it signals you’re part of the progressive group for justice. It’s your way of saying you share their classification of the world.

The danger with fuzzy information, though, is that it is by definition hard to measure, it’s hard to know how representative of the population it is in countries of hundreds of millions, and most of all it interacts with tribes and in-groups in strange and nonlinear ways.

One way to think of this is that every year in the US a few hundred kids drown and a few thousand have their limbs amputated by lawnmowers. Now, if this exclusively happened to a group with a shared identity, it might become flagged as a group issue. As of now it’s, for the most part, distributed across tribal groups. Plus, it’s not human violence, so it’s an event distributed across all identities, without a clear antagonist.

You know, though, that when these issues correlate with one group, and there is an out-group perceived as causing them, it blows up into a serious issue. And due to the fuzzy nature of the information, each group, and all outsiders, get a different picture of reality. Whether you know it or not, you are likely associated with a certain set of groups. Based on your political and group identity you will then choose a certain set of news outlets. These groups have a model of the world built around fuzzy information and its interaction with your groups.

Another way I like to think of this is that almost nothing we do politically suggests that we are good people. The causes we pick to fight over have some of the lowest effort-to-reward ratios, and our end goal always seems to involve amassing power. Our best and brightest don’t go to D.C. because of their love for public service. They go there for power in its rawest, most uncut form. Our smartest and most ambitious don’t take a pay cut because they love their fellow citizens. Do you really think followers of Alinsky’s Rules for Radicals have done more good (any good?) than, I don’t know, Effective Altruists?

It feels stupid to even write, because it’s so obvious: If you want to alleviate suffering, donate your time, money, or expertise, to those in need. If private citizens gave up their right to political speech and voting for a year in exchange for a $10 donation to Doctors Without Borders, the world would be a measurably better place. Political outrage doesn’t save lives. A few dollars worth of medicine saves lives.

Paying smart students to teach urban black kids can help the world. In a world where every family with the means donated $1,000 a year to private-sector urban development charities, multiple organizations with the same organizational efficiency as the Bill and Melinda Gates Foundation would form, poaching the best and most talented teachers from Berkeley, paying them $90k–120k a year to be competitive with Facebook, and having them go actually teach kids shit.

If you never wrote about how stupid or bad politicians are again, but donated a few dollars, you’d make the world a better place. We all would. If we all began donating in non-trivial amounts, our entire industry structure would shift to create a robust system that hires top talent to alleviate real, severe suffering inside our country and out of it. None of us do this. Political outrage isn’t about alleviating suffering or helping people. It’s about the facade of feeling good, sweet, just, and true, while doing what we really crave: fighting and struggling over power. As I said, it’s stupid to even write because it’s so obvious.

I guess the most potent counter-argument would be that if our side won we could use the power of governmental decree and taxation to outright solve the issue. Not only that, but if their side wins, the consequences are too horrific to consider. For example, we had incredible activists in Seattle such as Anna Louise Strong. In a book she wrote in the 1930s on the Soviet collective farms, she marvels at their efficiency and the government’s ability to collectively solve the tremendous and awful market failures that plagued her (awful) country, the United States.

She fought to gain power for what she believed was good, sweet, and true. She was an activist. With updated knowledge she seems like an awful person. We should be thankful modern progressive activists in the United States would never make such awful mistakes.


Let’s take a trip down to the dark side of fuzzy information. Can we undermine the worldview we grew up with? I know the model of fuzzy information I received throughout my education was pretty tilted. How dark can it get? And what should our response be to fuzzy information that chills us to the bone and counters everything we believe is good, sweet, and true? The question you have to ask yourself is the following: Is this fuzzy information not mainstream because it’s not representative of reality, while your model of the world is? Or can you find clear criteria for why your model is correct and this alternative one is wrong?

It’s hard to think through this stuff clearly. Sometimes when I read particularly reactionary or racial blogs I get chills: Evil is lurking. It seems like a catch-22 sometimes. If I read this stuff with an open mind I need to be willing to accept it as true. But the only people who accept this stuff as true are pretty hateful people. I’m not hateful, therefore I can’t accept it. But it’s important to have an open mind…

The Gates of Vienna documents in detail the dark fuzzy information regarding Islam and immigration. Read through some of the articles, focusing on the ones that translate or link to local news sources. In this post you can read about a refugee who raped a woman and was sentenced to only a year in jail. The court also determined he was too young to be deported. Does one example make a case? Of course not. It’s hard to measure all this stuff, and even if it weren’t, many countries either choose not to for ideological reasons or are too bureaucratic. Still, it’s strange how certain cases involving rape or sexual assault always seem to hit the fashionable news circles, whereas others never provoke the progressive outrage machine. Why?

A structured question to ask is whether the refugees are raping women at a higher rate than the men already there, right? Well, it’s interesting. But what would the answer tell us? Let’s suppose for a moment, contrary to fuzzy evidence, that they are raping women with the same likelihood. Does that mean immigration is acceptable? The problem is that they are raping women who are part of another tribe, which for that tribe is worse. Why is it worse? Well, as a tribe we know bad things go on within our tribe. That’s a constant. American blacks know the state of affairs within their communities. Ethnic Europeans know rapists exist within their groups.

When an outsider comes and attacks, your evolutionary brain starts screaming “ENEMY AT THE GATES.” Is this rational? Well, we are biological machines with preprogrammed subroutines, so if we instinctively perceive it as worse, that’s the same thing as it being worse. On the other hand, we have the capacity to incorporate information and, to an extent, override our preprogrammed subroutines. The tension in this conflict is incredible. Activism at its most successful identifies these base tensions that can be overridden by sufficient education. Scott Alexander makes the case nicely for the positives of an optimal (he calls it universal) culture:

On the one hand, universal culture is objectively better. Its science is more correct, its economy will grow faster, its soft drinks are more refreshing, its political systems are (necessarily) freer, and it is (in a certain specific sense) what everybody would select if given a free choice. It also seems morally better. The Tibetans did gouge out the eyes of would-be-runaway serfs. I realize the circularity of saying that universal culture is objectively morally better based on it seeming so to me, a universal culture member – but I am prepared to suspend that paradox in favor of not wanting people’s eyes gouged out for resisting slavery. 

Where does that place our current generation of activists? I’m honestly not completely sure, and it feels like a prisoner’s dilemma. If the other side treats the world as tribal, your best bet is to act tribal as well because you can’t expect them to have your interests at heart. However, it would be better for both groups to eschew tribalism or ethnic nationalism in favor of a common interest.

What’s the answer then? The problem is that it’s basically impossible to independently verify any of these claims. Remember the adorable picture of the dog? It consists of thousands of bits of information, far more than many modern datasets used for statistical analysis. Our brains can absorb and classify that information at a level both science and !science cannot hope to achieve. Those tribal subroutines ensure those vast amounts of information are distorted and biased in ways we can barely detect, but which surely obfuscate reality.

Of course, understanding this is the first step towards not being unaware of the world.


The angel in all humans still exists. Or at least I choose to conjecture that it does. If you push the robotic view to the point where killing a human is nothing more than breaking a very complicated computer, well… it depresses me to imagine. Thankfully, we all evolved to experience compassion.

Human rights is an exercise in balancing the world we want against its complex interaction with the angel buried in our processing algorithms.

When you look at a puppy you feel warm. When you form a tribal relationship you feel good. For humans, associating with a tribe improved your chances of survival, so the evolutionary subroutines buried deep in your brain reward you for the smart choice of joining a team: Good job!

My goal is to interface with reality. That’s what I do for fun. I imagine the world as an information rich matrix, and want to identify and ignore the evolutionary subroutines that are biasing my classification algorithms.

What would a biased algorithm look like? Probably people joining tribes and downplaying the downsides of their tribe while pointing out the flaws of their opponents. That’s the default. It’s what I used to do. I remember the president of my old university was a Mormon. That meant he donated 10% of his money to the Mormon church, which in turn donated money to causes against gay marriage. This was unacceptable to me; he was an enemy donating to enemy causes. I felt the righteousness flow through my blood as I posted this “controversial” topic on Facebook, where I would gain the support of my entire social circle.

Today we see #blacklivesmatter vs. #alllivesmatter. It’s not the first of the hashtag wars, but it may be the most retarded. One tribe views the world as consisting of oppressive structures that harm ethnic minorities, mainly American blacks, and advocates a combination of progressive equality dynamics and one-sided ethnic nationalism as the solution. The other side views long-run success as derived from the individual, and sees ethnic nationalism as never acceptable, even if tribal-ethnic outcomes are correlated with individual success (in a way that conveniently works out in their favor).

Whew. It’s a clusterfuck of blown out biased classifications. I could try to go into it a little more, but really.

This is not to suggest that both sides are wrong and the truth is in the middle. Rather, both sides are clearly modelling the world tribally; what they believe may still be true, but we have no reason to expect it to be correlated with reality.

The way in which we classify other humans and groups is essential to our survival, and our own survival doesn’t at all have to be correlated with, you know, reality. So is it any surprise that at its most base level it comes down to ‘enemy’ or ‘friend’? And maybe this is what Moldbug so deeply loves in Thomas Carlyle: a natural order removes our need for constantly classifying one another as friend or enemy, because it is formalized!

In Carlyle’s world, when the right formalization is set, we drop our tribal hangups. I suppose if we don’t, the government will hang us until we do. Churchill had more success squashing insurgencies than modern-day occupying forces. Humans love fighting, and they love their tribe. A system where that’s forcibly removed could be one where we all get along, or it could be a terrifying dystopia of public hangings and curfews.


Filtered Worlds



If you want to understand modern politics scientifically you have to understand the separate model classes that we use to understand the world, and how our brain fits into it all. Our brains aren’t mystical; they are flawed in understandable ways. Being part of a political party or movement is euphoric. There is the base excitement of belonging and fighting. Even if you think you are fighting a just battle, or actually are, you’re still fighting for power to impose your model of the world into action and law.

The stronger these feelings, the more conviction you will develop in your view of the world. You begin to see people on the other side as misguided and wrong, and yourself as smarter for seeing the truth. Your brain is a mushy algorithm that was not designed to understand the world, but to ensure your survival and defeat your enemies. At every turn it will trip and break you.

A reasoned discussion involves understanding another person’s model of the world and the data they are using. If you don’t know these things you can’t know they are wrong. That’s not as much fun, though, as trying to systematically dismantle someone’s argument and enjoying the feeling of showing that they are wrong.

Part One:

This difference between the legal scholar and the engineer is recognized but poorly understood. One deals with more complex arguments, the other with math (or whatever). I view this difference as one based on how many dimensions of data we are dealing with, and on the algorithms we use to solve the problem. It’s an oversimplification, but a useful one. Our brains run machine learning algorithms; everything we think or do starts by running an algorithmic routine in our brain. On one end of the spectrum, let’s call it the left, we have incredibly high-dimensional fields, such as legal theory, political theory, and business analytics. On the right we have lower-dimensional fields, such as physics, chemistry, and programming.

Without obsessing over exactly how we order fields of study, this will make sense to you generally. On the far left we have a very challenging time uncovering elegant low-dimensional abstractions for our questions. This means there aren’t obvious logical or mathematical models we can use to solve the questions of legal theory or history. Instead we are stuck trying to use our brain’s algorithms to learn and filter causality out of high-dimensional correlation and noise.

On the far right we still start with our brain’s algorithms (no escaping that), but we try to filter out a clear abstraction that generalizes the problem. No matter what you are trying to code, and no matter which language you choose, the abstraction of a ‘loop’ remains the same.

This classification of fields of study by dimensionality ends up being one where the high-dimensional fields rely exclusively on our brains. We can still refine our brain’s algorithms, train them on some sort of general framework (e.g. political theories), and learn and observe more about the world. But in the end the analysis comes from our brain. The lower-dimensional fields get their answers from a mathematical model or computer. Of course, the vast majority of analysis uses a combination of the two. This is where we try to map our knowledge of the high-dimensional problems of the world to lower dimensions, and use math or a computer to study them in ways our brains cannot.

The purpose of all models of the world is to find signals within noise in a way that makes sense to our brains. We can imagine our world as being mapped to an n-dimensional matrix with perfect fidelity, such as the main terminal of the simulation of our world, or our world as known by God. As we try to understand the world, the first step is to observe and document information from reality. This is done using a combination of neurons in our brain, which form patterns of synapses, and electronic switches that store bits of information on silicon.

Due to evolution it appears our brains are pretty good at scanning the world and deciding what’s worth gathering more information on. For example, only recently has an AI beaten a top human at Go, a game which deals with this exact issue of deciding what to gain information on when it is intractable to study all possible choices and outcomes.

Within these large amounts of information we can focus on causality or classification. I imagine all classes of models are relationships between information, patterns, and uncertainty, as they would exist in ‘God’s Mainframe.’

The difference between causality and classification has more to do with what makes sense to our brains than with a profound difference in a model’s relationship to information. Another way to say this: causal models and classification models seem like they might eventually become the same asymptotically, if we assume they progress towards perfectly explaining the universe. I expect this to be true because all of the causal models we have created were born from the learning algorithms of our brain, so it might be true by construction.

An example is modern medicine. Physicians often don’t understand the cause, but they are able to place symptoms in a class and match it to a treatment. In this task the physician is using a learning algorithm in their brain. For example, to classify and treat many diseases there is no need to understand the chemical mechanisms. Over an infinite time-frame, a learning model that classifies symptoms and then tests out a random treatment will eventually perfectly treat all disease. A more sophisticated classification algorithm that we haven’t yet invented would learn, at a more granular level, the biochemical processes related to a given disease in the human body, and instead of iterating would exploit the structure of the universe to converge on an answer more rapidly. The end result would be a learning algorithm that has ‘learned’ the parameters and mathematical structure that match the rules of our universe. We consider this an understanding of the causal reasons.
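A stripped-down sketch of that iterate-until-it-works idea, where the symptom classes, treatments, and success rates are all invented: for each symptom class, keep trying treatments (mostly the one that has worked best so far, occasionally a random one), and the matching converges without any model of the underlying biochemistry.

```python
import random
from collections import defaultdict

# Invented world: each symptom class has exactly one treatment that actually works.
true_best = {"rash": "ointment_b", "fever": "drug_a", "cough": "drug_c"}
treatments = ["drug_a", "ointment_b", "drug_c", "drug_d"]

random.seed(0)
successes = defaultdict(lambda: defaultdict(int))
attempts = defaultdict(lambda: defaultdict(int))

def choose(symptom, explore=0.1):
    # Epsilon-greedy: mostly exploit the treatment with the best observed success
    # rate for this symptom class, occasionally explore a random one.
    if random.random() < explore or not attempts[symptom]:
        return random.choice(treatments)
    return max(treatments,
               key=lambda t: successes[symptom][t] / max(attempts[symptom][t], 1))

for _ in range(5000):                   # a long stream of patients
    symptom = random.choice(list(true_best))
    treatment = choose(symptom)
    worked = (treatment == true_best[symptom]) and random.random() < 0.9
    attempts[symptom][treatment] += 1
    successes[symptom][treatment] += int(worked)

for symptom in true_best:
    learned = max(treatments,
                  key=lambda t: successes[symptom][t] / max(attempts[symptom][t], 1))
    print(symptom, "->", learned)       # converges to the right match, no causal model needed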

In this sense causality is often related to a more granular understanding of structural mechanisms, rather than simply observing at a higher level and classifying. It’s a meaningful distinction to our brains. For the purposes of distinguishing between model classes today, splitting them into causal and classification makes sense. While someday they will converge, at this point they are markedly different. We’ll explore how.

We can view current scientific inference as falling somewhere in this 2-dimensional world. This model of scientific models, like most models, is useful but simplified. High-dimensional (M) causality is mostly beyond current human brain capacity. It also frequently has the most interesting questions. For example: What caused the US Civil War? Not knowing the answers to very important and historically crucial questions is scary, so while our answers to these questions might be pretty good, we place more confidence in them than we should. In God’s mainframe these take the pattern of massive correlations and linkages across many dimensions and over time. The answers to these questions hope to pin down a trend within immense information by finding dimensions that explain a lot of what we see. This would be like a principal component analysis of God’s mainframe.
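In miniature, the PCA analogy looks like this (the data are my own toy construction): hundreds of observed variables that are secretly driven by a few hidden dimensions, and PCA recovering that a few components explain most of the variance.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy "mainframe": 500 observations of 200 variables that are secretly
# driven by just 3 hidden factors plus a little noise.
rng = np.random.default_rng(7)
hidden = rng.normal(size=(500, 3))            # the few dimensions that matter
mixing = rng.normal(size=(3, 200))            # how they show up in what we observe
observed = hidden @ mixing + 0.1 * rng.normal(size=(500, 200))

pca = PCA(n_components=10).fit(observed)
print(np.round(pca.explained_variance_ratio_, 3))
# The first 3 components soak up nearly all the variance; the rest are noise.
```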

Economists often try to build the answers to these questions from a series of low-dimensional (N) causal arguments. These arguments can be small mathematical models about why socialism doesn’t maximize consumer surplus. Even further down, in the bottom left, we have physics models. These perfectly explain phenomena in God’s mainframe. They say that along some dimension, every single time you see 10010101 you will then get 0010.

Classification with low-dimensional models is less common, but we see it to an extent in medicine. You get a rash, your doctor classifies it and matches it to an ointment. In many cases no one has any clue what caused that rash.

Learning methods are instead concerned with classifying or predicting in a high-dimensional space. This is the class of model that will detect your face in certain images. It isn’t a beautiful set of mathematical equations that detect each feature separately (I don’t even know if such a thing could exist); instead it creates a set of ‘neurons,’ essentially binary nodes that are on or off, which when combined in the hundreds can find the pattern of your face. I can’t point to any node and tell you what it’s focusing on, any more than I could blindfold you and have you tell me why you are able to detect your nose in a picture (it just looks like your nose). While scientists use the learning algorithms in their brain to search for causality, the aspirational goal is that the final result is a detection of causality, separate from the methods used to arrive at it. This isn’t always how it goes, but it’s the goal.
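A tiny illustration of the “binary nodes that only mean something in combination” point, using three hand-wired threshold neurons to compute XOR. This has nothing to do with faces; it is just about the smallest pattern that no single on/off node can capture on its own.

```python
def step(x):
    # A "neuron" that is simply on (1) or off (0) depending on its input.
    return 1 if x > 0 else 0

def tiny_network(x1, x2):
    h1 = step(x1 + x2 - 0.5)        # hidden node: fires if either input is on
    h2 = step(x1 + x2 - 1.5)        # hidden node: fires only if both inputs are on
    return step(h1 - h2 - 0.5)      # output node: "either but not both" (XOR)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", tiny_network(x1, x2))
# No single node computes XOR; the pattern only exists in their combination.
```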

What’s interesting is that machine learning has much more in common with human intuition than with traditional models used for causality. You observe the world through your sensory information (most of this comes through your computer screen). This works for extremely high-dimensional data and data types that are very, very challenging to compare numerically. Causal studies try to identify clear links between lower-dimensional datasets, with the goal of having higher precision and telling a story that answers why events happened a certain way.

Part Two:

Let’s say you think Bernie Sanders is best. You don’t know much about what makes a president great; after all, you’ve never met any. You don’t know why a friendly demeanor is a good thing for a leader to have, other than that it seems like a good thing, and leaders should do good things. You don’t know exactly how or why banking institutions work, but bad things are often associated with them, and usually bad things come from bad people. Or maybe you do know all these things, and you think Sanders is best. I know a lot more than most Sanders supporters about the type of outcomes we can expect from his political and economic policies, and I disagree with them. On the other hand, one of my brightest friends, who is writing his dissertation on campaign finance, believes voting for Sanders as a one-off event is crucial towards ending this issue. It’s strange then that I have this friend who is very smart, and dedicates all his time to researching this issue, and has determined he’s voting for Sanders. I don’t know nearly as much as him, but I won’t vote for Sanders.

The point is, your brain is learning and classifying, but it only impresses upon you the illusion of causal knowledge. Our confidence in our causal knowledge of the world is often not correlated with how much we actually know. In my experience we are awful at estimating what we don’t know.

You are just basing whatever you think on the machine learning algorithms running in your brain. Maybe they are great algorithms due to genetics, maybe they are great because you’ve trained them at school by having a professor show you the outcome and result of different cognitive biases. Our brains are great at making certain mistakes, like sunk-cost-fallacy errors.

While I’ve made a distinction between classification and causal thinking, we do switch between them seamlessly. Humans have the ability to just follow their own base algorithm, or to combine it with knowledge of the scientific method and work on teasing out causality. For example, consider drug policy: there are many medical professionals, practicing lawyers, and justice scholars who have devoted their lives to studying issues involving illegal drugs, studying the evidence on criminal policy, and thinking of effective ways to deal with these issues. Most of these folks have opinions on drug crime law. Most people without any of this knowledge also have opinions on drug crime law. Isn’t that strange?

With almost no evidence, data, or understanding, wouldn’t it make sense to just admit you aren’t informed and scale back the confidence you have in your opinion? But the human brain’s algorithms always want to spit out an answer, and they didn’t evolve to be calibrated to tell you that you lack the information to make a choice. Damnit, choices need to be made. Do we defend against the other tribe’s attack or run? Do we lock up drug users for life or rehabilitate them? No time to think: we need the answer now! The truth is, all evidence so far suggests complete legalization of all drugs would be optimal. There is at least enough evidence to justify trying it in some trial runs. It’s not happening anytime soon. You might even think this sounds insane, even though you probably don’t know much about the topic. What do you know anyway?

What is even more disturbing is that feeling of hot blood and excitement when your side wins. When a plane crashes, when there is a shooting, when a police officer kills someone: it’s an open secret that everyone is secretly hoping it justifies the platform of their side. Another datapoint to prove you were right. We build a model of the world, and we grow attached to that model. We eventually share that model with others, which often leads to a political in-group. That in-group wants to be right, and most of all it wants power. So it scans the world for data points that justify the model and that will result in power. This is what it is to have and use a human brain. I can’t avoid feeling it, it’s too base, but I can identify and acknowledge it.

If we had to reason about why this happened, it would be that our base algorithm had little evolutionary reason to develop reverence for the complexity of truth. But as a consolation prize, our prefrontal cortex has enough wiggle room to let us develop this in ourselves through training, and by building on the work of great humans before us. We can learn this on our own, but we usually don’t start with it.

So we have now established that even without knowledge of the scientific method or evidence, our brain’s algorithms will generate views on the world. This category includes very well-educated people in fields they aren’t familiar with, as well as everyone else. Sometimes these people are obvious in their obsession with conspiracy theories and faked moon landings. Sometimes it’s the naturopathic healing section at Whole Foods. Sometimes it’s a true belief about the cause of the Civil War. Sometimes it’s a view held with strong conviction on the benefits of rent control. Sometimes it’s a view on the role and benefit of government economic intervention. Fields of immense complexity distilled to simple beliefs held with conviction. Don’t worry though, here at Schools and Thought we aren’t going to throw our hands in the air, like Tolstoy, and give up. Instead we need to figure out a few things: 1.) What can we know? 2.) How can we know when we can’t know something? And 3.) What methods can we use to answer the first two questions?
Part Three:

What if we can’t hack out a causal argument, though, and are still stuck in a high-dimensional world with an overwhelming amount of information and association? Using these associations to loosely understand the world isn’t considered scientific knowledge. For example, most professors of social sciences have active political views. Despite this, academic journals in the respectable social sciences rarely ever endorse a politician or controversial policy. These academics instead write about this on Facebook or in personal blog posts. Why is this? Because these issues are too undefined and complex to nail into a scientific argument that focuses on a few variables and causal inference.

Instead they require hundreds of variables, their associations, and their interactions, which is something a brilliant, well-read human can model in his head and write out, but rarely prove by conventional standards or within a single academic article. I mean, there isn’t even a p-value.

Some social scientists do focus on high-information questions, where they study and question the abstractions and how they interact with a certain historical period. I wrote my master’s thesis on the political strategy behind the economic research on the Smoot-Hawley tariff, a U.S. tariff act debated in the late 1920s and passed in 1930. The economic abstraction was essentially that the Smoot-Hawley tariff was bad because trade restrictions lessen trade, which generates wealth. My argument, based on a detailed study of the Senate votes, political coalitions, and special interest groups, was that this is true, but that the trade restrictions were nonetheless useful for the Republican party. I dove into congressional testimonies, census data, and political pamphlets that other authors likely hadn’t read. There is too much information for anyone to read. Plus, while Smoot-Hawley is interesting, it’s not as though legions of researchers are studying it anymore. It’s not that hard to pick out a few texts no one else read and build an argument off them. Although maybe I’d have come to a different conclusion if, by random chance, I’d selected a different set of congressional records and political papers.

Another example is Winston Churchill and his WWII strategy. It’s established that we were the ‘good guys’ in WWII. No matter what we did, it was at least an order of magnitude less evil than the calculated murder of the Nazis. Not to mention communist Russia under Stalin, whom we condemn (but not as harshly as Hitler), although in terms of magnitude of evil it’s unclear whether Russia was any ‘better’ or ‘worse’ than Germany. Then again, magnitude of death might be the wrong measure, as Primo Levi writes: “In this lugubrious comparison between two models of hell, I must also add the fact that one entered the German camps, in general, never to emerge. No outcome but death was foreseen. In the Soviet camps, a possible limit to incarceration always existed. In Stalin’s day many of the “guilty” were given terribly long sentences (as much as 15 or 20 years), but hope of freedom, however faint, remained.”

Churchill personally pushed for bombing campaigns against German residential cities. The goal was to break their spirit to fight by burning them alive. Imagine an alternate world where the Allied forces had occupied Dresden and re-purposed a death camp. They then marched tens of thousands of Dresden women, children, elderly, sick, and men into the camp to be incinerated. Let’s be perfectly clear: that would have been the same core outcome (psychologically different). It’s not as though Britain didn’t have a precedent for this sort of thing in Iraq in the 1930s. As Philip S. Mumford, a former British officer in Iraq who left the force, said, “What is the difference between throwing 500 babies into a fire and throwing fire from aeroplanes on 500 babies? There is none.” Is it okay to incinerate civilians in total war? Sorry, what’s total war again? Were the children of Dresden going to school during the day and gassing Jews in Auschwitz by night? Is Trump the radical for suggesting we go after terrorists’ families? Or is it a radical idea that we don’t go after their families (we don’t target them now, but we kill a lot of them by accident, which is observationally equivalent)?

The conclusion behind these two examples is that there is a lot of information to consider, perhaps too much. Yet most of us hold views of the world despite knowing that if we studied a topic in more depth our view would change. We delude ourselves into thinking we have a well-defined and complete enough understanding of the world, history, WWII, and human suffering to write our moral scorecard for the 20th century and its policies. A scorecard we all use weekly to evaluate our politicians and government. We trust that the hundreds of civilians who have died from Obama’s drone strikes were necessary deaths. He’s still our cool, fashionable president.

This post isn’t on WWII or drone strikes; we could go far deeper down those rabbit holes. What it’s really about is underdetermining our political scorecard.

Underdetermination is the concept that another, much different model could generate the same outcomes as the one we are thinking about. This is a link worth reading in full, and the fact that it’s not taught as the base of a university education is tragic (I know everyone says that about their personal interest, but teaching the foundation of science takes precedence in my mind).
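A toy version of underdetermination, where the data and both “theories” are invented: two structurally different models fit the observed range about equally well, so the observations alone cannot tell you which one is generating them.

```python
import numpy as np

# Observations only cover a narrow range of x.
rng = np.random.default_rng(3)
x = np.linspace(0, 1, 40)
y = np.exp(x) + rng.normal(0, 0.05, size=x.size)   # "reality": exponential growth

# Theory A: the true exponential law. Theory B: a quadratic fitted to the same data.
theory_a = np.exp(x)
quadratic_coeffs = np.polyfit(x, y, deg=2)
theory_b = np.polyval(quadratic_coeffs, x)

for name, fit in [("exponential", theory_a), ("quadratic  ", theory_b)]:
    rmse = np.sqrt(np.mean((y - fit) ** 2))
    print(f"{name} RMSE on observed range: {rmse:.3f}")

# Both models fit the observed range about equally well, yet they diverge
# wildly once extrapolated; only new data (or a more granular mechanism)
# can separate them.
print("exponential at x=5:", round(np.exp(5.0), 1))
print("quadratic   at x=5:", round(np.polyval(quadratic_coeffs, 5.0), 1))
```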

I’ve been reading Curtis Yarvin and Scott Alexander’s blogs lately. Curtis Yarvin is the neoreactionary of the 21st century. Scott Alexander is an anti-neoreactionary who takes their arguments seriously. This isn’t a post on neoreaction, but neoreaction is a view of the world that profoundly challenges the progressive views on democracy, colonialism, revolution, fascism, and progressivism. If you don’t know these guys at all, consider Noam Chomsky instead. While his views are in some sense directly opposite to theirs, he also tries to underdetermine conventional thought. Or we can consider Rachel Maddow and Bill O’Reilly, who underdetermine “the Democrats” or “the Republicans” at a fourth-grade level.

If you listen to these guys enough, their model of the world starts to settle into your brain. At first there will be some skepticism, but then there is both the allure of hating the out-group and that of receiving selective information. It seems that we always want someone else to filter information and categorize enemies for us. You know GovTrack.us exists, right? You know the 10-K filings for corporations exist. YouTube and LiveLeak footage of combat from around the world is constantly uploaded. Internet archives contain congressional testimonies, old magazines, old books, old newspapers; once you start digging you find some weird things. You can gorge yourself on primary sources until you’re sick.

Superficially the two appear similar: both involve receiving information and augmenting your model of the world. But actual research is not as fun as having someone else filter that information for you, reassure you they are on your side, and point out the flaws in your enemies.

Enemies! They lurk in the shadows. In the outgroup. Your mushy computer was optimized for tribal warfare. Yet your null hypothesis is probably that your vision of the world is clear, not that it’s completely biased towards a tribal, enemy-centric view of the world. Admit it, you get a rush when you share John Oliver’s epic takedown of Trump. You’re not learning; it’s fun, it’s primal. When alt-right bloggers scream “Why is no one noticing that Muslim immigration is literally going to ruin Western Civilization,” it’s the same thing. Of course, they might be right, but the challenge is for them to convince us that it’s not just a warped view of the world. That’s always the hurdle.

Simulation Thinking:

A fruitful place to start in answering these questions is hypothesizing how we could solve them if we had access to computers, information, and models that don’t exist, but theoretically could. Most of our attempts to answer these high-dimensional questions with causal reasoning come from what I think of as running simulations in our brain.

For example, despite vast cultural and historical differences between countries like Russia, Iran, Venezuela, Saudi Arabia, and other oil nations, the ability of a government to gain money from oil, bypassing a tax-paying base, shows a clear pattern of allowing leaders to rely less on pressure from their populace. Trying to understand how this interacts with specific parties and historical events then becomes messy, and often relies on our simulating different outcomes to try to estimate what seems most plausible given our knowledge of human preferences.

This might end up looking something like this: Russian journalists critical of the regime have a way of dying. Russian political history is a monstrosity, but one thing we do know is that Russia gets oil revenue, and based on economic analysis and observation of other countries with oil revenue, we know this means the leaders rely less on their citizens for taxes. Now, with our economic understanding, context-specific details, and knowledge of past events, we simulate what the leaders would do given the situation and their preferences. We can test aspects of this by isolating abstractions and testing those predictions in the future (e.g. how a state's oil revenues correlate with dead journalists). However, we will never truly be able to experimentally test the unique constellation of events that happened in Russia at those points in time. At least not unless the universe repeats.

We don't give up though; we take it a step further. Are there abstractions related to past communism? To Putin's personality traits? You get the idea. We then consider these abstractions in our brain, based on our observations of similar events as well as our biological knowledge of human preferences and desires. We dump all of this into our brain simulation. We know strong leaders are often paranoid. We know leaders do not want their schemes exposed. We know the structural features of Russia's economy, and so on. This makes sense to us because we can imagine being the journalist and wanting to uncover these events. We can imagine being a leader and wanting power.

We can computationally test bits and pieces of these problems by pulling economic data and modelling it, but these models are embedded within our brain's contextual simulation. Ideally we would simulate this using a machine-learning model and a computer. Ideally this model would be a better version of the human brain: able to make abstract connections, but with numerical precision where possible, and able to hold all the information related to the question in memory. It would then spot patterns within this matrix of information, starting by building a dataframe that consists of every single piece of relevant information.

For the purpose of an example, let's assume a numerical mapping of countries' political and economic information into a high-dimensional matrix. Now let's imagine that before certain political outcomes there is often a pattern of 1011001. The model would run a simulation that reads all the data on a few countries, and then uses that information to iteratively predict a new country in the test set one time-step ahead. For example, it might train on all data up until the 1990s. Then it starts getting readings on Hugo Chavez's Venezuela. Having noticed the 1011001 pattern in the training data, it starts generating political predictions. You'll note that this is essentially what we do now with our very imperfect brain simulations. In fact, this only seems obvious because oil is obvious. How many other phenomena are as seemingly obvious as oil that we miss, because our brains suck at storing lots of information and considering interactions with more than a small handful of causal variables?
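
A toy version of that thought experiment might look like the sketch below. Everything is synthetic and the "features" are invented; the point is only the shape of the procedure: train on a few countries' histories, then predict a held-out country one time-step ahead, folding in each newly observed year as it arrives.

```python
# Train on some countries' full histories, then predict a new country
# one step ahead, updating as each year of data "arrives." Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_country(years=40):
    # Invented features (e.g. oil revenue share, press freedom index, ...).
    X = rng.normal(size=(years, 5))
    # Hidden pattern: high oil revenue + low press freedom raises the chance
    # of a repressive outcome in that year.
    logits = 1.5 * X[:, 0] - 1.0 * X[:, 1]
    y = (rng.random(years) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

# Training set: a few countries observed in full.
train = [make_country() for _ in range(5)]
X_train = np.vstack([X for X, _ in train])
y_train = np.concatenate([y for _, y in train])
model = LogisticRegression().fit(X_train, y_train)

# Test country: predict each year using only the information available so far.
X_new, y_new = make_country()
hits = 0
for t in range(len(y_new)):
    pred = model.predict(X_new[t:t + 1])[0]      # one-step-ahead prediction
    hits += int(pred == y_new[t])
    # Fold the newly observed year into the training data and refit.
    X_train = np.vstack([X_train, X_new[t:t + 1]])
    y_train = np.append(y_train, y_new[t])
    model = LogisticRegression().fit(X_train, y_train)

print(f"one-step-ahead accuracy on the new country: {hits / len(y_new):.2f}")
```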

Our brain simulations make another consistent mistake as well. They can make associations with everything: there is no imposed mathematical structure on the forms our associations have to take (they can be highly non-linear), and we observe all the information at once, the outcomes along with what came before. Without an imposed structure, our brains fill in the gaps and assign attribution however they like. Compare this to prediction, which imposes structure on our view of the world. If you understand a phenomenon you need to be able to predict it, and any prediction you make today can't be biased by already having observed the outcome.

Even workhorse regression models in the sciences frequently commit this error. When you fit a model over an entire dataset, the model implicitly sees everything that happened at once. If the data occurred over time, your parameters are estimated as though they had observed the whole history in advance. Yet this type of regression is endemic.
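
A hedged sketch of the difference, on deliberately meaningless synthetic data: a full-sample regression between two unrelated random walks happily reports a fit, while an honest walk-forward forecast, which only ever uses the past, is beaten by a naive "no change" guess.

```python
# Two unrelated random walks: the full-sample fit gets to "see" the entire
# history at once, while the walk-forward forecast only uses data available
# at each point in time.
import numpy as np

rng = np.random.default_rng(2)
n = 300
x = np.cumsum(rng.normal(size=n))   # random walk "predictor"
y = np.cumsum(rng.normal(size=n))   # unrelated random walk "outcome"

# Full-sample fit: parameters estimated with hindsight over everything.
slope, intercept = np.polyfit(x, y, deg=1)
r2 = 1 - np.var(y - (slope * x + intercept)) / np.var(y)
print(f"in-sample R^2 of the full-sample fit: {r2:.2f}  (an artifact of hindsight)")

# Walk-forward: re-estimate on the past only, then predict the next point.
errs_model, errs_naive = [], []
for t in range(50, n):
    s, b = np.polyfit(x[:t], y[:t], deg=1)
    errs_model.append(abs(y[t] - (s * x[t] + b)))
    errs_naive.append(abs(y[t] - y[t - 1]))      # naive "no change" forecast

print(f"walk-forward error, regression: {np.mean(errs_model):.2f}")
print(f"walk-forward error, naive:      {np.mean(errs_naive):.2f}")
```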

Few of us make predictions with clear criteria and then evaluate their success. Every day pundits, and anyone else who writes about the world, make thousands of predictions, yet we never see the after-the-fact evaluations of those predictions. How much better are they than tarot cards? Probably a little better, but a lot better? I don't know, because they rarely, or only selectively, keep track.
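
Keeping track doesn't require much. Here is a minimal sketch, with an invented record format and made-up example entries, of logging probabilistic predictions and scoring them after the fact with a Brier score.

```python
# A minimal prediction log: record a stated probability up front, fill in the
# outcome later, and score the whole ledger. The entries below are made up.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    probability: float              # stated probability that the claim comes true
    outcome: Optional[bool] = None  # filled in after the fact

def brier_score(predictions):
    scored = [p for p in predictions if p.outcome is not None]
    return sum((p.probability - float(p.outcome)) ** 2 for p in scored) / len(scored)

log = [
    Prediction("Candidate X wins the election", 0.40, outcome=False),
    Prediction("Country Y's GDP grows more than 2% this year", 0.60, outcome=True),
    Prediction("Treaty Z is signed by December", 0.30, outcome=False),
]

# 0 is perfect; always guessing 50% scores 0.25, so that's the tarot-card bar.
print(f"Brier score: {brier_score(log):.3f}")
```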

Part Six

So far we have gone over different classes of models and flawed model reasoning. This is different from a biased world view, which is more about prior beliefs and data. Separating the two is useful, but again they are not truly distinct phenomena. For example, your model of the world might determine where you gather data, which then feeds back into your model. If we do assume, though, that first we gather data and then we analyse it, we can have some success in classifying the different ways reasoning breaks down.

When people talk about ‘recognizing their bias’ it’s usually just a platitude when someone feels like there might be some reason others expect them to be wrong, even though they’re still sure they are right. What exactly is a bias? Jesse Hughes of the band Eagles of Death Metal–the band playing during the Paris terrorist attack–believes Islamic ideology is going to ruin France and Europe. I think we can both agree that it would be reasonable to call his opinion biased. But again, what exactly does this mean?

Bias means we disagree with their prior. It means that, because of their experiences, we are going to discount their model of the world. Specifically, we are claiming that their inference about the world is wrong because their experiences aren't representative of what we believe are the expected or average experiences. If our brains were like my R code, we could exclude that data and re-run the model. The human brain is essentially unable to exclude data and re-run the analysis. Instead, if we really try, we get a flag that says "past data suggests your experiences would cause you to be biased." I'm looking out my window right now at the Seattle skyline. If I saw a foreign terrorist crash a plane into a building in my city, there is a really good chance the output of my model on immigration would change. When the same thing happens but I don't see it, the effect is smaller. That isn't scientific reasoning, and the fact that it's true should be cause for you to seriously question your convictions. More than you do now. Yes, still a little more.

A guy called Aumann proved an agreement theorem, which says that two Bayesians with a common prior, whose posteriors are common knowledge, cannot agree to disagree. The math of agreement theory is interesting: by playing around with simple models where people try to estimate some distribution from different information, you reach some neat conclusions. Once differing priors or ambiguity are introduced, agreement is no longer guaranteed. We see this in practice: we tend to take on the priors of our professors. They might be the right priors, they might not be, but it's no surprise that our view of the world often lines up with that of the people we learn from.
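
This isn't Aumann's theorem itself, just a toy of the prior-dependence point above: two Bayesians with the same Beta prior who see the same evidence land on identical posteriors, while two who inherited different priors keep disagreeing after the same data. The priors and counts are invented for illustration.

```python
# Beta-Binomial toy: posterior mean for an unknown rate after shared evidence.
heads, tails = 12, 8   # shared evidence

def posterior_mean(prior_heads, prior_tails):
    # Beta(a, b) prior + Binomial data -> Beta(a + heads, b + tails) posterior
    return (prior_heads + heads) / (prior_heads + heads + prior_tails + tails)

# Same prior, same data: identical beliefs.
print(posterior_mean(1, 1), posterior_mean(1, 1))      # 0.59 and 0.59

# Different priors (different "professors"), same data: persistent disagreement.
print(posterior_mean(2, 20), posterior_mean(20, 2))    # ~0.33 vs ~0.76
```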

Let's take an open, already well-defined question off the shelf and break it down. Right now the immigration of Muslims from developing or high-conflict countries is the most emotionally charged topic in the US and EU. Small differences in predicted estimates can blow up in the limit. Let's consider two outcomes at an extremely high level of abstraction (glossing over the fact that, e.g., Muslim immigrants from Syria are far different from those from North Africa, which any serious policy debate must consider). We will also assume a shared model of the world that wants to improve the quality of life for everyone in our country, and to help others when it doesn't conflict with the first goal:
Outcome one is that over time 100% of the immigrant Muslim population will assimilate and contribute to our modern-day Western culture and Protestant values, at a net rate of +0.30% per year. The other group disagrees, thinking that while perhaps 95% will contribute, 5% will strongly detract, for a net rate of -0.20% per year. A small variation in prior beliefs can explode the estimate.
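
To see how little it takes for those numbers to diverge, here is the back-of-the-envelope compounding of the two stylized rates above (the rates are the ones in the example, nothing more):

```python
# Compound a +0.30%/yr estimate against a -0.20%/yr estimate over decades.
for years in (10, 30, 50, 100):
    optimist = (1 + 0.003) ** years
    pessimist = (1 - 0.002) ** years
    print(f"{years:>3} years: +0.30%/yr -> {optimist:.2f}x   -0.20%/yr -> {pessimist:.2f}x")
# After a century the two "small" estimates imply roughly 1.35x vs 0.82x of the
# baseline: a gap wide enough to fuel almost any amount of political heat.
```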

In a world of mushy computers arriving at different estimates for problems with huge variance, this is just a slight difference in estimated parameters. We use moral philosophy to codify the lessons we've picked up from watching parameter estimates and their outcomes, and to try to build a comforting structure of right and wrong.

Small differences in parameter estimates can result in profound suffering. From Madeleine Albright's Iraqi sanctions in the 1990s, to George Bush's invasion of Iraq, to the Obama administration in Syria and Clinton's willingness to inject more entropy into the Middle East, millions of people have died. We can't observe the counterfactual; maybe millions were going to die no matter what we did. Saddam Hussein and Assad are horrible people. Would Assad have quickly ended the rebellion and regained control if Saudi Arabia and the US hadn't armed 'moderates'? I take that question seriously, since it's really easy to write retrospective polemics that pick out 10 seconds of video footage and death-count footnotes to make a point. Clinton's emails encouraging the U.S. to overthrow Assad are funny though. They read like what you would expect from a lawyer who has spent her entire life at worst politically scheming, and at best fighting for domestic issues. In another way, they read like what you might expect a slightly out-of-touch CEO to write.

Our foreign policy has been that democracy and freedom are worth war, that dictators are bad, that when we implant the idea it wins in the long run, and that sovereign leaders who do not embrace this are betraying their citizens. Maybe Hillary Clinton and Madeleine Albright were right, their tough choices that indirectly killed millions were for the greater good, and a Muslim ban is unequivocally evil, but you can forgive me for not being convinced. I don't see evil here; I see a broken idea of what Democracy can promise a country, and our feverish attempt to apply it over and over. At least during the Cold War there was an existential fear that the other side was going to take over the world.

The right choice is to take a page out of Karl Popper's book: set up clear criteria for success and failure and run experiments where possible. The experiment of Muslim immigration has been running for a few decades now, and still there is incredible disagreement. What gives? One reason could be that all our data is observational, and if we had run a few natural experiments there would have been different outcomes. That probably wouldn't gain widespread agreement either. Even asking this question is frowned upon, since it implies the answer isn't obvious, that anyone who doesn't see its obviousness must be biased, and that the question itself is valid. Perhaps Islam isn't causal, but rather is highly correlated with another set of variables that are causal. I don't know, and I'm not pretending to know.

This post isn't about US foreign policy or Muslim immigration; it's about our broken reasoning systems. Each side has its experts. Most people don't know that much, but join a side anyway. I don't know that much about Muslim immigration. I do know that clusters of poor, non-assimilating immigrants from Muslim countries now exist in major European cities, which is correlated with negative outcomes. I also know that certain classes of Muslim immigrants from highly educated, stable, cosmopolitan cities are model citizens of the highest order. This implies there is a set of variables that interacts with religion to produce a complex outcome whose causal drivers are hard to attribute.

If you're a scholar in the field you've probably spent tens of thousands of hours learning and reading on this topic. The problem is that if someone else has spent a similar amount of time learning and is coming to a different conclusion, reconciling those differences is already hard enough.

However, if you believe your model of the world is correct, you need power to implement it, whether by teaching as a professor at Harvard, writing polemical blogs loosely related to economics for the NYtimes, working at a think tank in DC, working at the State Department, or serving as a politician. There are many ways to obtain power, but under all circumstances you need others to agree with your model and look to you for guidance. Or, if you only want power, you find the existing models you think are most likely to catch on and join that team. This isn't an academic exercise though; it requires people to actually go out and campaign and build support. But if we did want to simplify it, what would it look like?

You would want people to perceive your model as accurate, which is much different from scientifically convincing them it is accurate. This can be done knowing that humans dislike uncertainty in high-dimensional causal spaces and desperately want the world to be made clear. More importantly, humans want to belong to a side. These are models programmed into our brains at birth to prepare us for survival in a tribal world.

It is in the interest of each group to view itself as distinct and different. There are plenty of great theories on rising polarization. Another possibility, though, is that in our relatively new country and form of government, the knowledge of how best to win an election has slowly built on itself. If I went back to the 1960s with my laptop and what are now relatively simple financial econometric models, I could have made millions of dollars. These improved strategies don't come out of closed rooms with new research on psychology; they are small changes slowly filtered out of decades of 20th-century electoral experiments.

And this is what I'm left with. First, this post was too long; if I ever want anyone to read anything I write I need to be more succinct and not bite off more than I can chew. Second, humans try to filter high-dimensional problems using our collective brain power, but we also form in-groups based on shared values, which can be thought of as prior views of the world filtered down from past generations. We are an extended computational system interacting with an overly complex world.


Empathy as a Modelling Technique


I am not a progressive, but most of my friends are progressives. That’s cool. My disagreements with progressivism are nerdy and have to do with their conception of historiography. If you consider yourself a progressive and spend your time or money to help people who are suffering that’s good. If you spend your time trying to gain more political power and expressing outrage and shaming people on the internet, well, that’s something you can do.

As with any political movement, it helps to know the ways different individuals perceive their interaction with other people and the economy. This tends to be pretty hard and requires radically detaching yourself from your own experiences. It's like the scientific version of getting baked and staring out your window, trying to look at the world in some way you've never thought of before. It doesn't even require you to change your own views of the world, although it might. If you still want to advocate for politicians or policies, understanding your opponent can only help. If your opponent mistrusts and hates you, it helps to know why.

If you want to dismiss your opponents as some combination of strictly less intelligent than you, sorely misguided, or evil, that’s also something you can do. It’s what the New York Times editorial team does. I don’t think that’s the case, so I’ll try to explain why. Part of this explanation requires taking seriously experiences or ideas that are ignored on the basis of being hateful or racist. That doesn’t mean I’m hateful or racist. Please remember that difference. On the other hand I’m not going to avoid uncomfortable arguments, and I’m not going to turn them into sanitized strawmen either (which I imagine to be a scarecrow dipped in rubbing alcohol that we set on fire).

Given the enormous complexity of the world, despite my sticking to relatively scientific arguments, for all we know they are individually robust but in totality represent a biased view of the world. That’s actually okay. My point here even as I try my best to be methodological isn’t that I’m right. It’s that a reasonable person could construct a view of the world in this way such that it has the same claim to validity as any other construction. We are getting useful observations of the world, it’s just that we have too many degrees of freedom.

Where to start? There are three books that are important to understand: Karl Popper's The Poverty of Historicism, Butterfield's The Whig Interpretation of History, and Stove's essay. That's where I should start, but I'm going to save it for the next post.

Instead let's start with alt-right provocations. The alt-right might not be right, but they have a remarkably consistent model of the world. We'll start with Milo Yiannopoulos because he's hilarious, but if you're looking at him to understand why folks like Trump and Farage are gaining power, then you're only going to become upset. Milo wants to post stupid memes to provoke you and rile up the future of the party and his career. His most intellectual post was a brief history of the alt-right, which is a fine article, but he's no heavyweight.

Then there is Scott Alexander's Anti-Reactionary FAQ, which reads like a parent telling their kids "Ecstasy is really fun, but the rewards, which let me tell you will blow your fucking MIND, don't outweigh the risks." That was a brave post for him to write. You notice how he calls it anti-reactionary in the title? He also includes tons of trigger warnings. He's essentially admitted he uses these rhetorical techniques to shield himself from accusations of racism, using tribal code-words to signal that he's part of their group. He does this well; everyone thinks Scott Alexander is on their side. Could anyone else be welcomed to a bar by both progressive intellectuals and neo-fascists? He deserves every ounce of his fame.

Could… could neo-fascists and progressives find common ground? I am certain they could if they actually tried. In the far reaches of the alt-right movement we have Taylor Swift for Fascist Europe. It's strangely hilarious, but they're only half-joking. It's sardonic, it's irreverent, it's a "fuck you." When a cultural movement is built around criticizing and deconstructing whiteness, with modern academics writing "research papers" on popstars, is it any surprise that, as Moldbug says, "clever 19-year-olds discover that insulting it is now the funniest fucking thing in the world?"

Why is the alt-right rising? The reactosphere of neoreactionary, antisemitic, and anti-progressive internet denizens is growing quickly. Trump tapped into it, but it was growing before him, in lockstep with the online social justice movement, which is fashionable. If you want to know what the fashionable position is, ask yourself: "If I posted this particular political meme on my Facebook, am I more likely to get likes, or to lose all my friends?" What's peculiar is that the alt-right has formed a one-sided alliance with the unfashionable white lower-middle and working class. I say one-sided because I don't think the white lower middle class knows the alliance exists.

Why would you lose your friends though? Is it because your ideas are unfashionable? Or because they are simple and hateful, and we (rightly) don’t associate with hateful people? Let’s try an experiment. I’m going to go over an experience I’ve had, which you might find uncomfortable to read, but I’m also not going to say anything hateful. I will then explain why it does make you uncomfortable, but why it shouldn’t. A large part of our goal here is to help you understand the perceptions of people you find racist in a way that will light up the empathy neurons in your brain.

When I lived in London I was in a neighborhood called Elephant and Castle. It is now home to a large Nigerian immigrant population, as well as ethnic Brits (which, if you read volume one of Hume's History of England, and you should, means a mess of barbarian, pagan, and Roman populations, but whatever). We had to be careful late at night due to muggings, which were perceived to come mostly from the poorer Nigerians. They looked scary to us because we knew they drove the crime statistics. I won't lie and say I wasn't on edge, because I'm not a liar. Every night around midnight, more so on weekends, a group of 10-30 Nigerian men and teenagers would get drunk outside my first-floor dorm. They would yell and fight for hours in the street. Sometimes there were screams from the fighting. Once they stared through my flatmate's window at 2am and started yelling, until she came to my room asking to stay because she was scared. Another time I heard someone get severely hurt; I rushed to the main entrance and told the dorm supervisor, himself a kindly Nigerian immigrant, that he ought to call the police. He said that wasn't a good idea: you don't want to get the police involved around here, just ignore it. Violence in the streets of London is the natural state of order; we would only cause more harm by interfering. I think he made the right call, but I was naive at the time.

What is interesting is this experience of mine has zero model of the world embedded. It’s a recounting of a personal experience I had, nothing more. If my blog had more than three readers there is a good chance some would feel uncomfortable. Uncomfortable might be the wrong word, but we can agree that in good cosmopolitan company this wouldn’t be an appropriate story to tell. The problem is we impose models of the world on people based on what they say. The type of person who would talk about the downsides of immigration or outcomes associated with a nationality or race is a bad type of person. Why? Because that type of person is correlated with the type of person who historically did bad things.

I call it the Hitler and slave-owner experiment. If you walked into a bar with Hitler and an antebellum Southern slave owner and told your story, would they give you a nod and sip their beer? If so, the implication is that your experience validates their model of the world. Is that a bad thing? Well, it's not a good thing. But is it a bad thing? After all, another model could embed the same experiences, along with many others, and combine them in a way that implies nothing horrific at all.

Anyway, those of us living in London were thankfully at a great school. In fact, our school was founded by the great Soviet-loving progressives of the time, which struck me as bitterly ironic since LSE is now famous as an investment banking school. As a result most of us made enough money to move to nicer neighborhoods after graduation; besides, it's not as though we were raising families in that area. For a single year, unless you were one of my friends who got mugged, we can just call it unpleasant.


The problem is that some British families have lived in those neighborhoods for decades, and they remember when the neighborhoods were safer and culturally homogenous. As far as "safer" goes, that's based on the community's perception of security and safety. Should a community have better knowledge of its own safety than official statistics?

Official statistics are aggregate numbers broken down by predetermined dimensions. You have to have a hunch as to what data to track, how to track it, and how to segment it, before you actually go ahead and structure data collection.

Even then, it's so easy to lie with statistics as to make them rarely more useful than the filtered experiences of a community. A community lacks that formality and attribution; instead it approximates reality through a distributed process that filters out information, like which housing developments crime happens around, which alley is most dangerous, and what the criminals tend to look like.

The community's data is more granular and there are more data points, but it's less official and less rigorous. A trade-off. It's like a mix of the bias-variance tradeoff of modern machine learning and Hayek's point about knowledge in society. The community's model has lower variance because it draws on far more granular observations. Unfortunately, it might carry more bias. But do the official statistics not have a bias of their own? What if no group can help embedding its model of how the world ought to be into its objectivity?

Consider the Rotherham child sexual exploitation scandal. Click through the reports and read the names of those who committed the acts. There was initially a cover-up, due to fear that putting this information in the official records would inflame racism. Yeah, it probably would have. The common people have a more granular understanding of how other cultures interact within their communities. Of course, there is also strong reason to believe we dislike an attack from the outgroup far more than we care about the same attack from our own ingroup.

Let's look at Japan for a second. There are huge protests because an American killed a Japanese woman. There isn't a great reason to believe Americans do this more than ethnic Japanese. But we're not part of their identity; we're outsiders. When outsiders attack, your brain screams "fuck them fuck them fuck them." It's probably an evolutionary thing.

So here is the question you have to answer: why is this outgroup bias considered wrong? Let's take the null hypothesis to be that child sexual exploitation is not correlated with ethnicity or immigrant status. We don't know whether it is or isn't; it's not as though the British aristocracy has a stellar record of not raping kids. If the community doesn't know it's wrong, should it? Who is keeping track of this stuff in a way that's rigorous enough to be free of ingroup and statistical bias? And if it could be proven that per capita pedophilia is the same, could we then insist they are being irrational? Or is it legitimate to be angrier when someone from outside your identity commits a crime against someone within it? That's the history of human conflict, isn't it?

The point of this post, this blog, and really my entire take on the philosophy of science, is that our methods of filtering signals out of reality are broken and weird. It doesn't take a sophisticated philosophy of science, though, to ask for the counterfactual. After all, if per capita pedophilia sex gangs are equally distributed across immigrants and non-immigrants, do we have a problem? Well, obviously we have a problem, but you know what I mean.


The populist xenophobes of our Western world miss the dignity of work and the safety of cultural homogeneity. The problem is that when you explain these people's beliefs and actions as racism and xenophobia, what you're really saying is "fuck you." You're attributing their behavior to some degenerate moral condition or a lack of education. If only they were as smart and educated as us, they would appreciate their more violent neighborhoods and overcrowded hospitals.

Not to mention that their desire to live among their own traditional culture is viewed as a deeply racist preference not worthy of our respect. My background is culturally and ethnically diverse, but when you ask me to tell you, to prove to you, why it is so deeply and obviously wrong for towns and cultures to prefer homogeneity, I can't come up with an answer. Who am I to tell them what they are right or wrong to want? Maybe if I had religious beliefs it would be obvious, but I have none.

If I started to build and speculate on an optimal societal structure, and I really shouldn't, I might think it's okay to prefer homogenous cultures, religions, and values, so long as the community welcomes any person who is able to seamlessly integrate and contribute economically. If outsiders come in, do not assimilate to your values, often hold values that run counter to yours, and also damage your economic standing, is it not natural for resentment to build?

I’m generally of the belief that individuals in aggregate are better at understanding their circumstances than academics. Let’s see if I can cherry-pick any financial reports or research that back up this claim:

The costs associated with unauthorized immigrants are mainly concentrated in three areas: K-12 education, emergency medical care and incarceration, estimated by the researchers at approximately $116.6 million per year.

Fairley, Elena and Rich Jones. Colorado’s Undocumented Immigrants: What they pay, what they cost in taxes, The Bell Policy Center, April, 2011.

The overall taxes unauthorized immigrants pay into the system is greater than the amount of benefits they receive. However, many states and local public entities experience a net deficit because the costs of certain public services (education, health care, law enforcement, etc.) exceed the tax revenues they collect from unauthorized immigrant workers.

Coffey, Sarah Beth. Undocumented Immigrants in Georgia: Tax Contribution and Fiscal Concerns. The Georgia Budget and Policy Institute. January 2006.

It is estimated that the total revenue contribution, including state revenues and school property tax, from unauthorized immigrants was $1.58 billion. The total estimated cost of unauthorized immigrants, including education, health care, and incarceration, was $1.16 billion leaving the net benefit to the state at $424 million in fiscal year 2005.

Combs, Susan. Undocumented Immigrants in Texas: A Financial Analysis of the Impact to the State Budget and Economy. Texas Comptroller of Public Accounts. December 2006.

I might be a little elitist, because the Texas Comptroller doesn't strike me as a credible source, and I don't know anything about the Georgia policy institute. They might not be as smart as real academics; on the other hand, they might be less concerned with being fashionable and more interested in just putting out a good report. Conjecture on my end, I honestly have no idea.

We also have David Autor, who brings us more credibility on EconTalk. He writes about how the benefits of trade with China might have been overstated insofar as they benefit our entire country. It seems, rather, that some parts of our country keep losing jobs with no replacement. Huh.

The point here is to show that it's entirely feasible for communities to suffer from immigration on purely economic terms. Notice that in many of these scenarios immigrants pay in more than they take, but not in a way that compensates for the direct public services they use. Their taxes go to the general economy, and the work they do adds value to aggregate consumer surplus and to the shareholders of firms.

The schools the townies send their kids to, and the emergency rooms they rely on, are overcrowded and overused. Their wages are depressed, which we usually measure as a positive for the economy in aggregate, but which obviously sucks for them. Increased drug and gang violence seeps in, largely due to failed federal drug-war policies, but it manifests locally as drug-addicted communities and crowded jails.

And while we are lucky that our Christian culture has heavy overlap with Mexican culture, it doesn't with Muslim immigrants. The banlieues of Paris are Muslim ghettos: "The kids in the banlieues live in this perpetual present of weed, girls, gangsters, Islam." Maybe things would be different in the U.S. with massive immigration. Maybe they wouldn't. What are the benefits for lower-middle-class, culturally homogenous Christian communities that are already starved of meaningful jobs and enjoy their townie values?

I’m trying to help you see the world from their eyes. If you view them as the uneducated proles or unwashed masses you’re not seeing it from their perspective. And if you aren’t willing to empathize with them then you aren’t willing to understand them. You’re giving them the middle finger. Empathizing and understanding the world from the perspective of people you disagree with is incredibly challenging.

Keynes did a pretty good job. Still, it took decades before it became common knowledge that the post-WWI punishments might have created the Nazis. I'm sure a few smart guys spotted it at the time, but they were probably afraid to say it.

After all, if Trump supporters and Brexit voters are simply evil and uneducated, then it's hopeless. Because you're not evil, and you are educated. So when you try to imagine it, you just imagine "What if I had an irrational hatred of non-whites," and then you think, "Thank God I don't. Could one imagine being so base?"

The problem is what if it is explained by factors you don’t appreciate? If an old British lady in Elephant and Castle said “I miss the days before these Nigerian immigrants” and you launch into a 5 minute speech on her misunderstanding of the world, I think you’re the one who doesn’t get it, because it’s really simple. In the past she wasn’t scared when everyone shared her culture and were part of her in-group. Now she is scared and hears people who are foreign to her fighting and getting drunk outside her home. This may or may not be true, but it’s how she perceives the world. And when she explains to you her perceptions, you call her a xenophobe.


Imagine the US shares a 4-dimensional hyperplane with Intelliglandia. We just discovered this border last year, and it's a country where the average person has an IQ three standard deviations above our own. At first they just visit us for tourism, but over time some of their less educated realize they can live relatively incredible lives in our country. At first they start coming to our best schools; soon our top 30 schools are 70% intellipeople. Then they start getting jobs in high finance and tech.

Then they explain: look, there is this thing called the lump-of-labor fallacy, and we are providing incredible new value to your economy. I understand your generation aspired to do groundbreaking research, work at top tech firms, and work at hedge funds, but maybe it's time to reskill and realize these aren't the best jobs for you. I know your father, and his father before him, worked on computer science research and wrote code for the most elite firms, but there is a great accounts payable job at the QFC headquarters that you're better suited for. I know it's going to seem like you're making 60% less money than your parents. But because society as a whole is so much richer thanks to the incredible value we add, you're actually going to enjoy a better quality of life. You're welcome.

Most of my friends would be upset. Because for most of us it’s not the money or the cause-to-do-good that drives us. It’s the social praise. It’s being respected and appreciated by our family, our peers, and most of all strangers. The feeling of knowing others admire and respect you for your hard work and intellect is one of the strongest drivers. If you gut that, you gut the person’s drive for accomplishment.

If you want to understand the other side of America, or the UK, the Trump supporters or townies, you need to understand their feelings of social loss. They feel they have lost not only their respectable work but also their drive for social accomplishment. The reasons are complex, and some parts are hard to attribute.

The creative destruction of industry is nearly impossible to predict; even if it could be delayed by 5 or 10 years, it seems inevitable. Still, it's a smoking gun for why jobs are lost and not replaced. The next reason is immigration. When immigrants come in and push your wages down, that really sucks. Imagine you are doing research at your university, or coding for Microsoft, or doing analytics for a hedge fund, and a team of intellipeople comes in and starts doing the same job as well or better. Your boss says, "Look, you do great work here, but we're going to pay you $70k a year now, down from your $150k, because you're no longer as valuable." Or the ambitious paper you submitted to a top journal is rejected because an intelliperson did the same thing but spotted a few biases you missed. Aren't you happy? This is creative destruction; the economy is growing because more value is being added.

Nothing I wrote proves anything; it scratches the surface. You can believe it's all false, or that core parts of it are false, but you should at least understand that this is the perception of those who voted Leave or voted for Trump. This is how they see the world. As far as I'm concerned it's a reasonable way to see it from their perspective, and they deserve to be taken seriously. When you dismiss them as uneducated, or call them bigots, you're saying "fuck you." Why be surprised when they give you the finger back? If you truly are the morally just one, then it might be that these other people feel the same hot-blooded righteousness and outrage that you feel, and they are simply the ones who are wrong. That's a position you can take if you want.

Progressive Estimation Techniques

Modern conservative and progressive thought (mainly progressive thought, since it controls the academy) reminds me of a biased algorithm trying to estimate a model with tons of parameters. You need an algorithm that keeps trying parameter values, evaluating how well the model fits, and trying new ones. When I did economic research estimating these models, I would watch the four filtered series change drastically depending on the numbers chosen. In 100-dimensional space there are many regions of the parameter space your model never explores. You're never sure whether you have converged on a true estimate, or whether your model took a wrong turn on one parameter a few thousand iterations ago. If your algorithm gets stuck at a local maximum, it needs to take a huge guess in the right direction to escape. Our modern political thought, unlike that of all previous times and countries, might not be at a local maximum but may in fact be progressing toward the global maximum of a utopian society. If so, I hope to help prove that claim by exploring the most unfashionable arguments for you, to help you build your confidence that we are approaching that global maximum. I do this all in the name of progress.
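
A small sketch of that local-maximum worry, on an invented one-dimensional objective: a greedy hill climber settles on whichever peak is nearest to where it started, and only a big jump (a random restart) ever finds the better one.

```python
# A greedy hill climber on a bumpy made-up objective gets stuck; random
# restarts are the "huge guess in the right direction."
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    # Two peaks: a mediocre local maximum near x = -2, a better one near x = 3.
    return np.exp(-(x + 2) ** 2) + 2.0 * np.exp(-0.5 * (x - 3) ** 2)

def hill_climb(x, step=0.1, iters=500):
    for _ in range(iters):
        candidate = x + rng.normal(0, step)
        if objective(candidate) > objective(x):   # only accept improvements
            x = candidate
    return x

start = -2.5                        # unlucky starting beliefs
x_stuck = hill_climb(start)
print("stuck near:", round(x_stuck, 2), "value:", round(float(objective(x_stuck)), 2))

best = max((hill_climb(rng.uniform(-6, 6)) for _ in range(20)), key=objective)
print("best after restarts:", round(best, 2), "value:", round(float(objective(best)), 2))
```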

We live in a world of more than 100 parameters, if the number of parameters governing our interactions with the world can even be quantified in a meaningful sense. All the same, whether it's a true description or only an analogy (no way to tell), it's a useful way of envisioning the world. After all, our best scientific discoveries tend to arise from mathematical and logical models with parameters, so it's tempting to assume the same logic applies to reality as a whole. Let's make some assumptions and impose structure on what we want to call our model of reality, and state that each parameter represents a clear factor that makes sense to the human brain. We recently had a mass shooting (I'm sure this sentence will still hold true even if you read this post a year from when I posted it). In this case the primary parameters would be gun legislation, Islam, immigration, homophobia, and so forth. You'll notice this is a little shaky, since homophobia and Islam might have a distinct relationship, but let's stick with it for now. What if we are missing parameters, and these parameters have high-dimensional non-linear interactions? If the outcomes are emergent properties of complex interactions, every nice little argument would be dead wrong. Yet shorter arguments are perceived as more persuasive, and the memetics of nice arguments interact with the structures of our government.

In the social sciences our goal is to explore the parameter space. Sure, we do this with math and models, but it all starts with our brain's learning algorithms. We try to filter out the areas of our n-dimensional reality worth studying. Political systems tend to be useful areas of study, since they explain how humans interact; the Mariana Trench does not. This is obvious, but it's also not obvious: when you try to teach a computer the difference between obvious things, it often ends up being far harder than you expect. What, exactly, is our form of government? It's a classification problem. There is, literally, no such thing as a Democracy (despite what the tomes of old political theorists tried to prove). It's a label we assign when an extraordinary number of attributes fall within certain boundaries. The definition changes over time; the attributes we consider change over time. A measure of the electoral connection seems to be one of the most salient features filtered out by political scientists. When we pretend we have firm definitions we are lying to ourselves. Our Democratic experiment continues; perhaps the defining attributes are yet to arise.

On Democracy, what exactly is a political centrist? As the Soviet historian Martin Malia surveys academic thought in the introduction to his book, he writes that “The Soviet Union portrayed by Western social science represented a variant of ‘modernity,’ rough-hewn no doubt, yet in significant measure a success. Most specialists agreed, further, that the system was ‘stable.’” Malia goes on to point out the popular academic view that the rise of Bolshevism was “Democratic” and tolerated “diverse views.” But those are boring examples. Let's try some more fashionable conservative click-bait. If you're not aware of this strain of communist apologists you might be a little shocked. More likely you have an idea that some of these people exist, and take it for granted. Anyway, the red scare is so 1964. What we should really be concerned about is too many white men in our curriculum. That's a topical link for me since I'm from Seattle, but it is commonplace.

There are enough blogs highlighting the absurd double standards on which issues academics are (or aren't) morally judged. The question that interests me is how many times this fight has already been lost. We have entire departments devoted to all forms (except white) of ethnic nationalism, gender identity, and activism. The goal of these departments is to build a cohesive worldview that will build a better society, and to propose the core assumptions and worldviews we must hold and enact to approach this optimal society. This blog focuses on only one question: does this work as intended? Not whether it sounds nice, because of course it sounds nice. Plus, who doesn't want to battle evil in pursuit of justice? Battle is fun. Unfortunately for the battle-ready, Karl Popper, the most famous philosopher of science of the 20th century, doesn't seem to think these battles should exist. In The Poverty of Historicism, from 1957, he notes:

“i) Unintended consequences: the implementation of Historicist programs such as Marxism often means a fundamental change to society. Due to the complexity of social interaction this results in lots of unintended consequences (i.e. it tends not to work properly). Equally it becomes impossible to tease out the cause of any given effect so nothing is learnt from the experiment / revolution.[20]

ii) Lack of information: large scale social experiments cannot increase our knowledge of the social process because as power is centralised to enable theories to put into practice, dissent must be repressed, and so it is harder and harder to find out what people really think, and so whether the utopian experiment is working properly. This assumes that a dictator in such a position could be benevolent and not corrupted by the accumulation of power, which may be doubted.”

Do you think Karl Popper is assigned in many modern activist departments? I might be getting ahead of myself: a few quotes on Soviet-era scholarship, a link to some radical student activists, and a claim that Popper isn't assigned shouldn't be sufficient to convince anyone that activist departments exist, much less that they are more about gaining power than scientific discovery. I will try to convince you of this as carefully and scientifically as possible. Or to convince myself that I'm wrong. Or that I'm in some foggy no man's land of pure noise. I don't know. I think I'm right, but many wrong people have thought they were right, so I'll exercise some humility.

Still, there are reasons to be concerned. Certain questions are off-limits and cannot be discussed by any professor at an elite institution with a dream of tenure, power, or influence. They are left to a few brave academics and smart bloggers. The problem isn't that these guys are right (those two links are on human biodiversity); it's that the questions cannot even be spoken. Try it: next time you meet up with your most fashionable friends, bring up the arguments in The 10,000 Year Explosion. You don't even have to assert that they could be right, or hint at it. Actually, don't, because you'll make everyone uncomfortable and might upset a few people.

Our philosophy of science works very well for the STEM fields, it's rapidly improving for laboratory and medical experiments, and it's making progress in some areas of the social sciences. Yet in a world where our supercomputers cannot simulate complex molecular interactions in any reasonable amount of time, what hope do we have of constructing a story of history that has filtered out the true causal drivers, with all the pesky interactions controlled for? What if some views of history are dangerous? If humans are tribal, then evidence supporting a tribe, even if it's true, might lead to war, genocide, slavery, and any number of horrific outcomes. Would we then need to be shielded from those views? Conspiratorial logic is awful though; there is no conspiracy. No group of academics gathered in a musty room to suppress thought. That makes for good movies, but it's unrealistic. It would come about through emergent properties.

I remember vividly that I was forced to take a course in African American Political Thought. I say forced because I had no choice in the curriculum of the honors program. We read James Baldwin and Frederick Douglass exclusively. Growing a little bored of literature, I once asked my professor in a seminar what his view was of Thomas Sowell's argument in Black Rednecks and White Liberals. Maybe Sowell was right, maybe he wasn't; I didn't know, I was a student. I was told we weren't to discuss him because my professor didn't respect Sowell's authority. That seemed a little strange. Plus, in what sense is literature science? I don't think it's obvious one way or another how to incorporate literature into scientific thought. It is a valid question, but not an obvious one. There was no conspiracy here, so was this just one of those emergent properties in action?

How do we incorporate old books, stories, or the news into our conception of government? We must, since we all hold opinions on these things that are the result of more than a few textbooks and courses. When I think of the success of capitalism, what goes on in my head? It's a cacophony of sources, models, and stories I have read. They fire through the neural network of my synapses and I filter out a reason why capitalism is probably good. All of this is conditional on there being an already-filtered set of attributes that we can consistently classify as capitalism. My experience of the world suggests that trying to derive complex systems from first principles doesn't work, no matter how reasonable the axioms appear. What actually seems to happen is that we filter out key components from empirical observations. If that's true, though, teaching the field from assumptions, as I learned it, doesn't really make sense.

The rest of my courses were a standard liberal education. I was lucky enough to have a professor who introduced me to the great libertarian thinkers of the 20th century. That was independent study, of course; it's incredibly rare for Hayek or Milton Friedman to be taught in class. Other than the libertarian perspective, there aren't many academics who disagree with modern progressive thought. Some economists have a conservative bent, typically on economic policy, and there are some unfashionable religious schools that object to abortion and the perceived debauchery of the left.

At the time the libertarian thinkers seemed to be the dark side. I was never willing to fully commit; in the back of my mind gay marriage, universal healthcare, and legalized marijuana were the defining fights of my generation. I remember vividly the feeling of outrage and hate when I saw the religious right fight against gay marriage. Who did these people think they were? I saw myself as an underdog fighting an oppressive power. Sure, we had the entire academic system, most policy makers, the presidency, the NYtimes, and everyone under age 25 on our side. That's not to say some of the points aren't legitimate. While tough-on-crime policy can't be neatly attributed to one evil side, it's no secret that in modern public opinion the right favors the death penalty and tough-on-crime approaches. Plus, depriving women of reproductive health care and preventing gays from marrying is needless and petty. The reason I'm not worried is that the religious right is boring, because they are losing. Every time progressives beat them on an issue, they push the line forward and start the next battle while lamenting that the war never ends. The difference is that progressives will actually define the 21st century. Does anyone really think the American right will?

The problem is that as you obsess over your in-group and fighting the out-group, you slowly form a contorted and twisted version of the world. You're presented with a picture of reality in which some set of issues are the issues, and your opponents take their stance on the issues. What are the issues? They tend to be the specific policy questions that best split the population in two and can be absorbed into one of two parties. The question we have to ask ourselves is whether this group-versus-group battle over the issues portrays an accurate picture of reality. Or do we get so caught up in our side, our battle, our righteousness, that we completely lose sight of just how complex our world is? And if we do lose sight, who is going to tell us? Where is the guy, detached from any group mentality, reading primary sources from the past and present, who will tap you on the shoulder and say, "I think you've given in to the hot-blooded excitement of tribalism, and have gone slightly off course"?

I’m not convinced what I learned accurately represents reality, or a meaningful history of thought. The academy handed me a set of base assumptions. I started out with half my worldview assumed to be true, without realizing I was at all learning on assumptions. It’s not that what I was taught was necessarily wrong, but after seeing the world of high-fashion in the academy and corruption of scientific thought, I have no reason to trust anything I was taught as an unbiased picture.

Curtis Yarvin started the neoreactionary movement online, which really just amounted to him breathing life into a once renowned, now less popular philosopher, Thomas Carlyle. Is he right? Well, his combined evidence and criticism of modern progressive thought is overwhelming. Is his solution the right one, that Democracy is a broken system? These are interesting questions. Why aren't they asked in the academy? One answer is that they are so obviously wrong, like anti-vaccination or numerology, that they serve no purpose. They aren't obviously wrong to me, or to at least a few other smart people I know. Maybe we aren't that bright and everyone else has it figured out; honestly, though, I don't think that's it.

In our developed countries the inertia required for political violence is large, because it rarely promises rewards. This is due to centuries of institutional architecture. In other countries and times it hasn't worked as well. Looking back, these revolutions, insane political experiments, massacres, famines, and wars seem wrong. The people behind them were brainwashed, probably evil. Except they didn't view themselves as evil; they truly believed that what they were doing was right and would save their country. Malcolm Muggeridge was a British journalist stationed in the USSR. He met with many of the British liberals who came to visit the grand Soviet experiment with optimism:

You would be amazed at the gullibility that’s expressed. We foreign journalists in Moscow used to amuse ourselves, as a matter of fact, by competing with one another as to who could wish upon one of these intelligentsia visitors to the USSR the most outrageous fantasy. We would tell them, for instance, that the shortage of milk in Moscow was entirely due to the fact that all milk was given nursing mothers – things like that. If they put it in the articles they subsequently wrote, then you’d score a point. One story I floated myself, for which I received considerable acclaim, was that the huge queues outside food shops came about because the Soviet workers were so ardent in building Socialism that they just wouldn’t rest, and the only way the government could get them to rest for even two or three hours was organizing a queue for them to stand in. I laugh at it all now, but at the time you can imagine what a shock it was to someone like myself, who had been brought up to regard liberal intellectuals as the samurai, the absolute elite, of the human race, to find that they could be taken in by deceptions which a half-witted boy would see through in an instant.

At the time, if you were an intellectual liberal in Britain you were expected to fawn over the USSR and the great promises of communism. How can we be confident we aren't falling into the same traps? None of us wants to be mocked in 100 years for being misguided. If you took an incredibly unfashionable argument, something well thought out and not base, and posted it on Facebook, how many friends would revel in their disgust for you? Sharing fashionable posts is a great way to signal how smart you are, and historically the fashionable positions have often been misguided (newsletters and pamphlets played Facebook's role before it existed). They aren't always wrong, but what would it take to plant a seed of doubt in your mind?

On the other hand, the neoreactionary view that progressivism and Democracy are completely broken is outrageous. So there might be a simpler explanation for why we don't consider these seemingly radical ideas: they are stupid. If we assume a sort of efficient-market hypothesis for ideas, it makes sense that the intellectuals of our past would already have vetted and discarded the regions of parameter space that make no sense. Unfortunately, the existence today of academics who take seriously the mysticism of philosophers like Hegel and the unfalsifiability of Marx doesn't support that argument. As Stove points out, quoting Hegel:

“His book is, naturally, full of quotations from Hegel’s early writings. In subject-matter these passages range from the astronomical to the zoological. For the examples which I promised earlier in this essay, I have chosen two of the astronomical ones. First:

In the indifferences of light, the aether has scattered its absolute indifference into a multiplicity; in the blooms of the solar system it has borne its inner Reason and totality out into expansion. But the individualizations of light are dispersed in multiplicity [i.e. the fixed stars], while those which form the orbiting petals of the solar system must behave towards them with rigid individuality [i.e. they have their fixed orbits]. And so the unity of the stars lacks the form of universality, while that of the solar system lacks pure unity, and neither carries in itself the absolute Concept as such.


In the spirit the absolutely simple aether has returned to itself by way of the infinity of the Earth; in the Earth as such this union of the absolute simplicity of aether and infinity exists; it spreads into the universal fluidity, but its spreading fixates itself as singular things; and the numerical unit of singularity, which is the essential characteristic (Bestimmtheit) for the brute becomes itself an ideal factor, a moment. The concept of Spirit, as thus determined, is Consciousness, the concept of the union of the simple with infinity;

Do you know any example of the corruption of thought which is more extreme than these two? Did you even know, until now, that human thought was capable of this degree of corruption?

Yet Hegel grew out of Kant, Fichte, and Schelling, as naturally as Green, Bradley, and all the other later idealists, grew out of him. I mention these historical commonplaces, in case anyone should entertain the groundless hope of writing Hegel off as an isolated freak. But now, remembering those historical facts, while also keeping our eyes firmly on the two passages I have just given, will someone please tell me again that the Logical Positivists were on the wrong track, and that we ought to revere the ‘great thinkers’, and that the human race is not mad?

I agree, Stove: why were the Logical Positivists told they were on the wrong track? Oh, and first, who were the Logical Positivists? We've been handed a set of great thinkers, philosophers, scientists, and lessons from our historical past. Combined, they tell a story of how the world unfolded, the best form of government, and the most refined ideas. If we could look backwards and pull out a different set of thought that has been forgotten, but that is equally robust and suggests our current conclusions are completely incorrect, what would that look like?