Filtered Worlds

 

Intro:

If you want to understand modern politics scientifically, you have to understand the separate model classes we use to understand the world, and how our brain fits into it all. Our brains aren't mystical; they are flawed in understandable ways. Being part of a political party or movement is euphoric. There is the base excitement of belonging and fighting. Even if you think you are fighting a just battle, or actually are, you're still fighting for power to translate your model of the world into action and law.

The stronger these feelings, the more conviction you will develop in your view of the world. You begin to see people on the other side as misguided and wrong, and yourself as smarter for seeing the truth. Your brain is a mushy algorithm that was not designed to understand the world, but to ensure your survival and defeat your enemies. At every turn it will trip you up.

A reasoned discussion involves understanding another person's model of the world and the data they are using. If you don't know these things, you can't know they are wrong. That's not as much fun, though, as systematically dismantling someone's argument and enjoying the feeling of showing that they are wrong.
Part One:

The difference between the legal scholar and the engineer is recognized but poorly understood. One deals with more complex arguments, the other with math (or whatever). I view this difference as one of how many dimensions of data we are dealing with, and of the algorithms we use to solve the problem. It's an oversimplification, but a useful one. Our brains run machine learning algorithms; everything we think or do starts by running an algorithmic routine in our brain. On one end of the spectrum, let's call it the left, we have incredibly high-dimensional fields such as legal theory, political theory, and business analytics. On the right we have lower-dimensional fields such as physics, chemistry, and programming.

Without obsessing over exactly how fields of study should be ordered, this will make sense to you generally. On the far left we have a very challenging time uncovering elegant low-dimensional abstractions for our questions. This means there aren't obvious logical or mathematical models we can use to solve the questions of legal theory or history. Instead we are stuck trying to use our brain's algorithms to learn and filter out causality from high-dimensional correlation and noise.

On the far right we still start with our brain's algorithms (no escaping that), but we try to filter out a clear abstraction that generalizes the problem. No matter what you are trying to code, and in whatever language you choose, the abstraction of a 'loop' remains the same.
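
A minimal sketch of that sameness, in Python (the choice of language is incidental): three different surface syntaxes, one underlying abstraction.

```python
from functools import reduce

numbers = [1, 2, 3, 4, 5]

# Three different surface forms...
total_for = 0
for n in numbers:                  # explicit for-loop
    total_for += n

total_while, i = 0, 0              # while-loop
while i < len(numbers):
    total_while += numbers[i]
    i += 1

# fold/reduce, the functional spelling of the same idea
total_reduce = reduce(lambda acc, n: acc + n, numbers, 0)

# ...one abstraction: repeat an operation over a sequence.
assert total_for == total_while == total_reduce == 15
```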

The high-dimensional fields end up being ones where we use our brains exclusively. We can still refine our brain's algorithms, train them on some general framework (e.g. political theories), and learn and observe more about the world. But in the end the analysis comes from our brain. The lower-dimensional fields get their answers from a mathematical model or a computer. Of course, the vast majority of analysis uses a combination of the two: we try to map our knowledge of the high-dimensional problems of the world onto lower dimensions, and use math or a computer to study them in ways our brains cannot.

The purpose of all models of the world is to find signals within noise in a way that makes sense to our brains. We can imagine our world as mapped to an n-dimensional matrix with perfect fidelity, such as the main terminal of the simulation of our world, or our world as known by God. As we try to understand the world, the first step is to observe and document what we see in reality. This is done using a combination of neurons in our brain, which form patterns of synapses, and electronic switches that store bits of information on silicon.

Due to evolution, our brains appear pretty good at scanning the world and deciding what's worth gathering more information on. For example, only recently has an AI beaten a top human at Go, a game that presents this exact issue of deciding what to gain information on when it is intractable to study all possible choices and outcomes.

Within these large amounts of information we can focus on causality or classification. I imagine all classes of models as relationships between information, patterns, and uncertainty, as they would exist in 'God's Mainframe.'

The difference between causality and classification has more to do with what makes sense to our brains than with any profound difference in a model's relationship to information. Another way to say this: causal models and classification models might asymptotically become the same, if we assume they progress towards perfectly explaining the universe. I expect this to be true because all of the causal models we have created were born from the learning algorithms of our brain, so it might be true by construction.

An example is modern medicine. Physicians often don't understand the cause, but they are able to place symptoms in a class and match that class to a treatment. In this task the physician is using a learning algorithm in their brain; to classify and treat many diseases there is no need to understand the chemical mechanisms. Over an infinite time-frame, a learning model that classifies symptoms and then tests out random treatments will eventually perfectly treat all disease. A more sophisticated classification algorithm that we haven't yet invented would learn the biochemical processes of a given disease at a more granular level, and instead of iterating would exploit the structure of the universe to converge on an answer more rapidly. The end result would be a learning algorithm that has 'learned' the parameters and mathematical structure that match the rules of our universe. We consider this understanding the causal reasons.
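
A minimal sketch of the iterating learner described above, with made-up symptom classes and made-up cure probabilities: classify the symptom, try treatments at random, keep whatever works most often, and never once represent a biochemical mechanism.

```python
import random

random.seed(0)
# Hypothetical world: each symptom class has a hidden best treatment.
TRUE_SUCCESS = {                # P(cure) per (symptom, treatment), invented
    "rash":  {"ointment A": 0.8, "ointment B": 0.3},
    "cough": {"syrup":      0.2, "inhaler":    0.7},
}

# counts[symptom][treatment] = [cures, attempts]
counts = {s: {t: [0, 0] for t in ts} for s, ts in TRUE_SUCCESS.items()}

for _ in range(10_000):                          # iterate: classify, try, record
    symptom = random.choice(list(TRUE_SUCCESS))
    treatment = random.choice(list(TRUE_SUCCESS[symptom]))
    cured = random.random() < TRUE_SUCCESS[symptom][treatment]
    counts[symptom][treatment][0] += cured
    counts[symptom][treatment][1] += 1

# With enough iterations the observed cure rates converge on the hidden ones,
# and "best treatment per class" falls out with no causal model at all.
for symptom, ts in counts.items():
    best = max(ts, key=lambda t: ts[t][0] / ts[t][1])
    print(symptom, "->", best)
```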

In this sense causality is often a more granular understanding of structural mechanisms, rather than simply observing at a higher level and classifying. It's a meaningful distinction to our brains. For the purposes of distinguishing between model classes today, splitting them into causal and classification makes sense. While someday they may converge, at this point they are markedly different. We'll explore how.

We can view current scientific inference as falling somewhere in this two-dimensional world. This model of scientific models, like most models, is useful but simplified. High-dimensional (M) causality is mostly beyond current human brain capacity. It also frequently holds the most interesting questions. For example: what caused the US Civil War? Not knowing the answers to very important and historically crucial questions is scary, so while our answers might be pretty good, we place more confidence in them than we should. In God's mainframe these questions take the pattern of massive correlations and linkages across many dimensions and over time. Answers to them hope to pin down a trend within immense information by finding the dimensions that explain most of what we see. This would be like a principal component analysis of God's mainframe.
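
To make the analogy concrete, here is a toy stand-in for that mainframe: synthetic data in which fifty observed variables are secretly driven by two latent dimensions, which a PCA (computed via the SVD) recovers.

```python
import numpy as np

rng = np.random.default_rng(0)
# 1000 observations of 50 variables, secretly driven by 2 latent factors.
latent = rng.normal(size=(1000, 2))
loadings = rng.normal(size=(2, 50))
X = latent @ loadings + rng.normal(scale=0.1, size=(1000, 50))

# PCA via SVD: center, decompose, inspect explained variance.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print(explained[:4])  # nearly all variance sits in the first two components
```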

Economists often try to build the answers to these questions with a series of low-dimensional (N) causal arguments. These arguments can be small mathematical models about why socialism doesn’t maximize consumer surplus. Even further down in the bottom left we have physics models. These perfectly explain phenomena in God’s mainframe. They say along some dimension every single time you see 10010101 you will then get 0010.

Low-dimensional classification is less common, but we see it to an extent in medicine. You develop a rash; your doctor classifies it and matches it to an ointment. In many cases no one has any clue what caused that rash.

Learning methods are instead concerned with classifying or predicting in a high-dimensional space. This is the class of model that detects your face in images. It isn't a beautiful set of mathematical equations that detect each feature separately (I don't even know if such a thing could exist); instead it creates a set of 'neurons,' essentially binary nodes that are on or off, which when combined in the hundreds can find the pattern of your face. I can't point to any node and tell you what it's focusing on, any more than I could blindfold you and have you tell me why you are able to detect your nose in a picture (because it looks like how it is). While scientists use the learning algorithms in their brains to search for causality, the aspirational goal is that the final result is a detection of causality, separate from the methods used to arrive at it. This isn't always how it goes, but it's the goal.
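
A hedged sketch of why the nodes are opaque: a tiny feed-forward network with random, purely illustrative weights produces a score, but no single hidden unit corresponds to a nameable feature like "nose."

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=64)              # a stand-in for pixel inputs

W1 = rng.normal(size=(100, 64))      # first layer: 100 hidden "neurons"
W2 = rng.normal(size=(1, 100))       # output layer

hidden = np.maximum(0, W1 @ x)       # each unit is just on (positive) or off
score = W2 @ hidden                  # the combination carries the pattern

# You can read every number, but no single row of W1 "is" a feature detector;
# the detection lives in the joint activity of hundreds of units.
print(int((hidden > 0).sum()), float(score))
```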

What's interesting is that machine learning has much more in common with human intuition than with the traditional models used for causality. You observe the world through your sensory information (most of which comes through your computer screen). This works for extremely high-dimensional data and data-types that are very challenging to compare numerically. Causal studies instead try to identify clear links between lower-dimensional datasets, with the goal of higher precision and of telling a story that answers why events happened a certain way.

Part Two:

Let's say you think Bernie Sanders is best. You don't know much about what makes a president great; after all, you've never met any. You don't know why a friendly demeanor is a good thing for a leader to have, other than that it seems like a good thing, and leaders should do good things. You don't know exactly how or why banking institutions work, but bad things are often associated with them, and usually bad things come from bad people. Or maybe you do know all these things, and you think Sanders is best. I know more than most Sanders supporters about the outcomes we can expect from his political and economic policies, and I disagree with them. On the other hand, one of my brightest friends, who is writing his dissertation on campaign finance, believes voting for Sanders as a one-off event is crucial to ending that problem. It's strange: he is very smart, dedicates all his time to researching this issue, and has determined he's voting for Sanders. I don't know nearly as much as he does, but I won't vote for Sanders.

The point is, your brain is learning and classifying, but it only impresses upon you the illusion of causal knowledge. Our confidence in our causal knowledge of the world is often not correlated with how much we actually know. In my experience we are awful at approximating what we don't know.

You are just basing whatever you think on the machine learning algorithms running in your brain. Maybe they are great algorithms due to genetics; maybe they are great because you've trained them at school, with a professor showing you the outcomes of different cognitive biases. Our brains are great at making certain mistakes, like sunk-cost errors.

While I've drawn a distinction between classification and causal thinking, we switch between the two seamlessly. Humans can simply follow their own base algorithm, or try to combine it with knowledge of the scientific method and work on teasing out causality. For example, consider drug policy. There are many medical professionals, practicing lawyers, and justice scholars who have devoted their lives to studying issues involving illegal drugs, examining the evidence on criminal policy, and thinking of effective ways to deal with these issues. Most of these folks have opinions on drug crime law. Most people without any of this knowledge also have opinions on drug crime law. Isn't that strange?

With almost no evidence, data, or understanding, wouldn't it make sense to just admit you aren't informed and scale back the confidence you have in your opinion? But the human brain's algorithms always want to spit out an answer, and they didn't evolve to be calibrated to tell you that you lack the information to make a choice. Damnit, choices need to be made. Do we defend against the other tribe's attack or run? Do we lock up drug users for life or rehabilitate them? No time to think: we need the answer now! The truth is, all evidence so far suggests complete legalization of all drugs would be optimal. There is at least enough evidence to justify some trial runs. It's not happening anytime soon. You might even think this sounds insane, even though you probably don't know much about the topic. What do you know anyway?

What is even more disturbing is that feeling of hot blood and excitement when your side wins. When a plane crashes, when there is a shooting, when a police officer kills someone, it's an open secret that everyone is hoping it justifies the platform of their side. Another data point to prove you were right. We build a model of the world, and we grow attached to that model. We eventually share that model with others, which often leads to a political in-group. That in-group wants to be right, and most of all it wants power. So it scans the world for data points that justify the model and that will result in power. This is what it is to have and use a human brain. I can't avoid feeling it, it's too base, but I can identify and acknowledge it.

If we had to reason about why this happened, it would be that our base algorithm had little evolutionary reason to develop reverence for the complexity of truth. But as a consolation prize, our prefrontal cortex has enough wiggle room to let us develop this in ourselves through training, building on the work of great humans before us. We can learn it on our own, but we usually don't start with it.

We have now established that even without knowledge of the scientific method or evidence, our brain's algorithms will generate views on the world. This category includes very well educated people in fields they aren't familiar with, as well as everyone else. Sometimes these people are obvious in their obsession with conspiracy theories and faked moon landings. Sometimes it's the naturopathic healing section at Whole Foods. Sometimes it's a true belief in the cause of the Civil War. Sometimes it's a view held with strong conviction on the benefits of rent control. Sometimes it's a view on the role and benefit of government economic intervention. Fields of immense complexity distilled to simple beliefs held with conviction. Don't worry though, here at Schools and Thought we aren't going to throw our hands in the air, like Tolstoy, and give up. Instead we need to figure out a few things: 1.) What can we know? 2.) How can we know when we can't know something? And 3.) What methods can we use to answer these first two questions?
Part Three:

What if we can't hack out a causal argument, though, and are still stuck in a high-dimensional world with an overwhelming amount of information and association? Using these associations to loosely understand the world isn't considered scientific knowledge. For example, most professors of social science have active political views. Despite this, academic journals in the respectable social sciences rarely endorse a politician or a controversial policy. These academics instead write about such things on Facebook or in personal blog posts. Why? Because these issues are too undefined and complex to nail into a scientific argument that focuses on a few variables and causal inference.

Instead they require hundreds of variables, their associations, and their interactions, which is something a brilliant, well-read human can model in his head and write out, but rarely prove by conventional standards or within a single academic article. I mean, there isn't even a p-value.

Some social scientists do focus on information-rich questions, where they study and question the abstractions and how they interact with a certain historical period. I wrote my master's thesis on the political strategy behind the economic research on the Smoot-Hawley tariff, a U.S. tariff act passed in 1930. The economic abstraction was essentially that the Smoot-Hawley tariff was bad because trade restrictions lessen trade, and trade generates wealth. My argument was that this is true, but that the trade restrictions were useful for the Republican party, based on a detailed study of the votes in the Senate, political coalitions, and special interest groups. I dove into congressional testimony, census data, and political pamphlets that other authors likely hadn't read. There is too much information for anyone to read. Plus, while Smoot-Hawley is interesting, it's not as though legions of researchers are studying it anymore. It's not that hard to pick out a few texts no one else read and build an argument off them. Although maybe I'd have come to a different conclusion if, by random chance, I'd selected a different set of congressional records and political papers.

Another example is Winston Churchill and his WWII strategy. It's established that we were the 'good guys' in WWII. No matter what we did, it was at least an order of magnitude less evil than the calculated murder of the Nazis. Not to mention communist Russia under Stalin, whom we condemn (but not as harshly as Hitler), although in terms of magnitude of evil it's unclear whether Russia was any 'better' or 'worse' than Germany. Then again, magnitude of death might be the wrong measure, as Primo Levi writes: “In this lugubrious comparison between two models of hell, I must also add the fact that one entered the German camps, in general, never to emerge. No outcome but death was foreseen. In the Soviet camps, a possible limit to incarceration always existed. In Stalin’s day many of the “guilty” were given terribly long sentences (as much as 15 or 20 years), but hope of freedom, however faint, remained.”

Churchill personally pushed for bombing campaigns against German residential cities. The goal was to break the will to fight by burning people alive. Imagine an alternate world where the allied forces had occupied Dresden and re-purposed a death camp, then marched tens of thousands of Dresden's women, children, elderly, sick, and men into the camp to be incinerated. Let's be perfectly clear: that would have been the same core outcome (psychologically different). It's not as though Britain lacked precedent for this sort of thing, from Iraq in the 1930s. As Philip S. Mumford, a former British officer in Iraq who left the force, said: “What is the difference between throwing 500 babies into a fire and throwing fire from aeroplanes on 500 babies? There is none.” Is it okay to incinerate civilians in total war? Sorry, what's total war again? Were the children of Dresden going to school during the day and gassing Jews in Auschwitz by night? Is Trump the radical for suggesting we go after terrorists' families? Or is it a radical idea that we don't go after their families? (We don't target them now, but we kill a lot of them by accident, which is observationally equivalent.)

The conclusion from these two examples is that there is a lot of information to consider, perhaps too much. Yet most of us hold views of the world despite knowing that if we studied a topic in more depth, our view would change. We delude ourselves into thinking we have a well-defined and complete enough understanding of the world, history, WWII, and human suffering to write our moral score-card for the 20th century and for policy. A score-card we all use weekly to evaluate our politicians and government. We trust that the hundreds of civilians who have died from Obama's drone strikes were necessary deaths. He's still our cool, fashionable president.

This post isn't about WWII or drone strikes; we could go far deeper down those rabbit holes. What it's really about is underdetermining our political score-card.

Underdetermination is the concept that another, much different model could generate the same outcomes as the one we are considering. It's worth reading about in full, and the fact that it isn't taught as the base of a university education is tragic (I know everyone says that about their personal interest, but teaching the foundations of science has precedent in my mind).
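
A minimal numerical illustration of the idea: two visibly different models that agree perfectly on every observation we happen to have, and diverge everywhere we didn't look. (Data and models are invented for the purpose.)

```python
import numpy as np

observed_x = np.arange(6)                   # we only ever sampled at integers

model_a = lambda x: x                       # "the world is linear"
model_b = lambda x: x + np.sin(np.pi * x)   # a very different world...

# ...yet identical on all observed data, since sin(pi * n) = 0 at integers.
assert np.allclose(model_a(observed_x), model_b(observed_x))

print(model_a(2.5), model_b(2.5))           # they disagree between data points
```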

I've been reading Curtis Yarvin's and Scott Alexander's blogs lately. Curtis Yarvin is the neoreactionary of the 21st century. Scott Alexander is an anti-neoreactionary who takes their arguments seriously. This isn't a post on neoreaction, but neoreaction is a view of the world that profoundly challenges progressive views on democracy, colonialism, revolution, fascism, and progressivism. If you don't know these guys at all, consider Noam Chomsky instead. While his views are in some sense directly opposite to theirs, he also tries to underdetermine conventional thought. Or we can consider Rachel Maddow and Bill O'Reilly, who underdetermine "the Democrats" or "the Republicans" at a 4th-grade level.

If you listen to these guys enough, their model of the world starts to settle into your brain. At first there will be some skepticism, but then there is both the allure of hating the out-group and that of receiving selective information. It seems we always want someone else to filter information and categorize enemies for us. You know GovTrack.US exists, right? You know reuters.com exists. You know corporations' 10-K filings exist. YouTube and LiveLeak footage of combat from around the world is constantly uploaded. Internet archives contain all the congressional testimony, old magazines, old books, and old newspapers; once you start digging you find some weird things. You can gorge yourself on primary sources until you're sick.

Superficially, the two appear similar: both involve receiving information and augmenting your model of the world. But actual research is not as fun as having someone else filter that information for you, reassure you they are on your side, and point out the flaws in your enemies.

Enemies! They lurk in the shadows. In the outgroup. Your mushy computer was optimized for tribal warfare. Yet your null hypothesis is probably that your vision of the world is clear, not that it's completely biased towards a tribal-enemy view of the world. Admit it, you get a rush when you share John Oliver's epic takedown of Trump. You're not learning; it's fun, it's primal. When alt-right bloggers scream "Why is no one noticing that Muslim immigration is literally going to ruin Western Civilization," it's the same thing. Of course, they might be right, but the challenge is for them to convince us that it's not just a warped view of the world. That's always the hurdle.

Simulation Thinking:

A fruitful place to start in answering these questions is hypothesizing how we could solve them if we had access to computers, information, and models that don't exist but could theoretically exist. Most of our attempts to answer these high-dimensional questions with causal reasoning come from what I think of as running simulations in our brain.

For example, despite the vast cultural and historical differences between countries like Russia, Iran, Venezuela, Saudi Arabia, and other oil nations, a government's ability to gain money from oil, bypassing a tax-paying base, shows a clear pattern: it allows leaders to rely less on the pressures of their populace. Trying to understand how this interacts with specific parties and historical events then becomes messy, and often relies on simulating different outcomes to estimate what seems most plausible given our knowledge of human preferences.

This might end up looking something like this: Russian journalists critical of the regime have a way of dying. Russian political history is a monstrosity, but one thing we know is that Russia gets oil revenue, and from our economic analysis and observation of other countries with oil revenue, we know this means the leaders rely less on citizens for taxes. Now, with our economic understanding, context-specific details, and knowledge of past events, we simulate what the leaders would do based on our knowledge of the situation and their preferences. We can test aspects of this by trying to isolate abstractions and testing those predictions in the future (i.e. how oil revenues for a state correlate with dead journalists). However, we will never truly be able to experimentally test the unique constellation of events that happened in Russia at those points in time. At least not unless the universe repeats.

We don't give up, though; we take it a step further. Are there abstractions related to past communism? To Putin's personality traits? You get the idea. We then try to consider these abstractions in our brains based on our observations of similar events, as well as our own biological knowledge of human preferences and desires. We dump all this knowledge into our brain simulation. We know strong leaders are often paranoid. We know leaders do not want their schemes exposed. We know aspects of Russia's economy, and so on. This makes sense to us because we can imagine being the journalist wanting to uncover these events. We can imagine being a leader wanting power.

We can computationally test bits and pieces of these problems by pulling economic data and modelling it, but these models are embedded within our brain's contextual simulation. Ideally we would run the whole simulation using a machine-learning model and a computer. Ideally this model would be a better version of the human brain: it would make abstract connections, but with numerical precision where possible, and it would hold all information related to the question in its memory. It would then spot patterns within this matrix of information, by creating a dataframe that consists of every single piece of relevant information.

For the purposes of an example, let's assume a numerical mapping of countries' political and economic information to a high-dimensional matrix. Now let's imagine that before certain political outcomes there is often a pattern of 1011001. The model would run a simulation that reads all the data on a few countries, and then uses that information to iteratively predict a new country in the test set one time-step ahead. For example, it might start on all data up to the 1990s. Then it starts getting readings on Hugo Chavez's Venezuela. The trained model notices the 1011001 pattern and starts generating political predictions. You'll note that this is essentially what we do now with our very imperfect brain simulations. In fact, this only seems obvious because oil is obvious. How many other phenomena are as seemingly obvious as oil that we miss, because our brains suck at storing lots of information and at considering interactions among more than a small handful of causal variables?
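
A toy sketch of that procedure, with synthetic bit-strings standing in for encoded country histories (the motif and every number here are invented): plant the pattern, then walk forward through a held-out "country" one time-step at a time and predict whenever the most recent bits match what training extracted.

```python
import random

random.seed(0)
MOTIF = "1011001"

def make_history(n=60):
    """Random bit-string with the motif planted; the outcome follows it."""
    bits = [random.choice("01") for _ in range(n)]
    pos = random.randrange(10, n - 10)
    bits[pos:pos + len(MOTIF)] = MOTIF     # plant the pattern
    return "".join(bits), pos + len(MOTIF) # outcome lands right after

# "Training" on pre-1990s countries would notice that the motif precedes the
# outcome; here we hard-code what such a learner would extract: the motif.

# Walk forward through a held-out country, predicting an outcome whenever
# the most recent window matches. (Chance matches give false alarms, which
# is itself part of the lesson.)
history, true_year = make_history()
for year in range(len(MOTIF), len(history)):
    if history[year - len(MOTIF):year] == MOTIF:
        print(f"predict outcome at step {year} (true outcome: {true_year})")
```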

Our brain simulations make another consistent mistake as well. They can make associations with everything; there is no imposed mathematical structure on the forms our associations have to take (i.e. they are highly non-linear), and we observe all the information at once, the outcomes and what came before. Without an imposed structure, our brains fill in the gaps and assign attribution. Compare this to prediction, which imposes structure on our world. If you understand a phenomenon, you need to be able to predict it, and any prediction you make today can't be biased by already having observed the outcome.

Even workhorse regression models in the sciences frequently commit this error. When you fit a model over an entire dataset, the model implicitly knows, and is fitting, everything that happened at once. This means that if you are fitting a model to data that occurred over time, your parameters are estimated as though they observed everything at once. Yet this type of regression is endemic.
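
A minimal sketch of the difference on synthetic data: the full-sample fit lets the parameters "see" the whole timeline at once, while a walk-forward fit only ever sees the past and is scored on points it has never seen.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)
y = 0.05 * t + rng.normal(0, 2, size=200)   # a synthetic "history"

# The endemic version: fit once over the whole timeline. The coefficients
# are chosen with full knowledge of every point they are then scored on.
coef_full = np.polyfit(t, y, 1)
mse_in_sample = np.mean((np.polyval(coef_full, t) - y) ** 2)

# The honest version: walk forward, fitting on the past only and predicting
# one step ahead each time.
errors = []
for i in range(100, 200):
    coef = np.polyfit(t[:i], y[:i], 1)            # past data only
    errors.append(np.polyval(coef, t[i]) - y[i])  # predict the next point
mse_walk_forward = np.mean(np.square(errors))

# In-sample error systematically flatters the model; only the walk-forward
# number measures prediction.
print(mse_in_sample, mse_walk_forward)
```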

Few of us make predictions with clear criteria and evaluate their success. Every day, pundits and everyone else who writes about the world make thousands of predictions, yet we rarely see after-the-fact evaluations of those predictions. How much better are they than tarot cards? Probably a little better, but a lot better? I don't know, because they rarely, or only selectively, keep track.

Part Four:

So far we have gone over different classes of models and flawed model reasoning. This is different from a biased world view, which is more about prior beliefs and data. Separating the two is useful, but again, they are not truly distinct phenomena. For example, your model of the world might determine where you gather data, which then feeds back into your model. If we do assume, though, that first we gather data and next we analyze it, we can have some success in classifying the different breakdowns of reasoning.

When people talk about 'recognizing their bias' it's usually just a platitude, uttered when someone senses there might be some reason others expect them to be wrong, even though they're still sure they are right. What exactly is a bias? Jesse Hughes of the band Eagles of Death Metal, the band playing during the Paris terrorist attack, believes Islamic ideology is going to ruin France and Europe. I think we can both agree that it would be reasonable to call his opinion biased. But again, what exactly does this mean?

Bias means we disagree with someone's prior. It means that, due to their experiences, we are going to discount their model of the world. Specifically, we are claiming that their inference about the world is wrong because their experiences aren't representative of what we believe are the expected or average experiences. If our brains were like my R code, we could exclude that data and re-run the model. But the human brain has essentially no ability to exclude data and re-run an analysis. Instead, if we really try, we get a flag that says "past data suggests your experiences would cause you to be biased." I'm looking out my window right now at the Seattle skyline. If I saw a foreign terrorist crash a plane into a building in my city, there is a really good chance the output of my model on immigration would change. When the same thing happens but I don't see it, the effect is smaller. That isn't scientific reasoning, and the fact that this is true should cause you to seriously question your convictions. More than you do now. Still a little more.
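
The post mentions R; here is the same idea as a Python sketch, with hypothetical numbers: code can drop the unrepresentative observations and re-estimate, which a brain cannot.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical "experiences": mostly ordinary observations, plus a few
# extreme ones (a vivid attack seen first-hand, say).
ordinary = rng.normal(0.0, 1.0, size=500)
vivid = np.array([8.0, 9.0, 10.0])           # unrepresentative but memorable
experiences = np.concatenate([ordinary, vivid])

print(experiences.mean())                    # the estimate with biased data
unbiased = experiences[experiences < 5.0]    # exclude and re-run, as code can
print(unbiased.mean())                       # the step a brain can't perform
```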

A fellow named Aumann proved an agreement theorem, which says that two Bayesians with a common prior, whose posteriors are common knowledge, cannot agree to disagree. The math of agreement theory is interesting: by playing with simple models where people try to estimate some distribution from different prior information, you can reach some neat conclusions. When varying priors or ambiguity are introduced, agreement is no longer guaranteed. We see this in practice: we tend to take on the priors of our professors. They might be the right priors, they might not be, but it's no surprise that our view of the world often lines up with those we learn from.
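
A toy version of that setup under a Beta-Bernoulli model (all numbers illustrative): two agents watching the same coin flips end up with identical posteriors when they share a prior, and persistently different ones when they don't.

```python
import numpy as np

# Two agents estimate a coin's bias with Beta(a, b) priors, observing the
# same flips. Same prior -> identical posteriors; different priors -> the
# disagreement shrinks with data but does not vanish at finite n.
flips = np.random.default_rng(2).binomial(1, 0.7, size=50)
heads, tails = flips.sum(), len(flips) - flips.sum()

def posterior_mean(a, b):
    # Standard Beta-Bernoulli conjugate update.
    return (a + heads) / (a + b + heads + tails)

print(posterior_mean(1, 1), posterior_mean(1, 1))   # common prior: agree
print(posterior_mean(1, 9), posterior_mean(9, 1))   # different priors: disagree
```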

Let's take an open and already well-defined question off the shelf and break it down. Right now, immigration of Muslims from developing or high-conflict countries is the most emotionally charged topic in the US and EU. Small differences in predicted estimates can blow up in the limit. Let's consider two outcomes at an extremely high level of abstraction (e.g. Muslim immigrants from Syria are far different from those from North Africa, and any serious policy debate must consider this). We will also assume we share a model of the world that wants to improve the quality of life for everyone in our country, and also to help others when that doesn't conflict with the first goal:
Outcome one is that over time 100% of the immigrant Muslim population will assimilate and contribute to modern-day Western culture and Protestant values at a net rate of 0.30% per year. The other group disagrees and thinks that while perhaps 95% will contribute, 5% will strongly detract, for a net rate of -0.20% per year. A small variation in prior beliefs can explode the estimate.
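
The blow-up is just compounding. A sketch of how a half-point gap in the assumed annual rate diverges over decades:

```python
# Two estimates of a net annual rate, compounded over decades. A gap of half
# a percentage point per year becomes a large gap in long-run outcomes.
for years in (10, 50, 100):
    optimist = 1.003 ** years     # +0.30% per year
    pessimist = 0.998 ** years    # -0.20% per year
    print(years, round(optimist, 3), round(pessimist, 3),
          round(optimist / pessimist, 2))   # the ratio keeps growing
```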

In a world of mushy computers coming to different estimates on problems with huge variance, this is just a slight difference in our estimated parameters. We use moral philosophy to codify the lessons we've picked up from observing parameter estimates and their outcomes, and to build a comforting structure of right and wrong.

Small differences in parameter estimates can result in profound suffering. From the Iraqi sanctions of the 1990s under Madeleine Albright, to George Bush's invasion of Iraq, to the Obama administration in Syria and Clinton's willingness to inject more entropy into the Middle East, millions of people have died. We can't observe the counterfactual; maybe no matter what we did, millions were going to die. Saddam Hussein and Assad are horrible people. Would Assad have quickly ended the rebellion and regained control if Saudi Arabia and the US hadn't armed 'moderates'? I take that question seriously, since it's really easy to write retrospective polemics that pick out 10 seconds of video footage and death-count footnotes to make a point. Clinton's emails encouraging the U.S. to overthrow Assad are funny, though. They read like what you would expect from a lawyer who has spent her entire life at worst politically scheming, and at best fighting for domestic issues. Put another way, they read like what you might expect a slightly out-of-touch CEO to write.

Our foreign policy has been that democracy and freedom are worth war, that dictators are bad, that when we implant the idea it wins in the long run, and that sovereign leaders who do not embrace it are betraying their citizens. Maybe Hillary Clinton and Madeleine Albright were right, their tough choices that indirectly killed millions were for the greater good, and a Muslim ban is unequivocally evil, but you can forgive me for not being convinced. I don't see evil here; I see a broken idea of what democracy can promise a country, and our feverish attempts to apply it over and over. At least during the Cold War there was an existential fear that the other side was going to take over the world.

The right choice is to take a page out of Karl Popper's book: set up clear criteria for success and failure, and run experiments where possible. The experiments on Muslim immigration have been running for a few decades now, and still there is incredible disagreement. What gives? One reason could be that all our data is observational, and a few controlled experiments might have shown different outcomes. Even that probably wouldn't produce widespread agreement. Even my asking this question is frowned upon, as it implies the answer isn't obvious, and that anyone who doesn't see the obviousness of it must be biased. It also implies the question is valid. Perhaps Islam isn't causal, but rather is highly correlated with another set of variables that are causal. I don't know, and I'm not pretending to know.

This post isn't about US foreign policy or Muslim immigration; it's about our broken reasoning systems. Each side has its experts. Most people don't know that much, but join a side anyway. I don't know that much about Muslim immigration. I do know that clusters of poor, non-assimilating immigrants from Muslim countries now exist in major European cities, and that this is correlated with negative outcomes. I also know that certain classes of Muslim immigrants from highly educated, stable, cosmopolitan cities are model citizens of the highest order. This implies there is a set of variables that interacts with religion to produce a complex outcome, with no clear way to attribute the causal drivers.

If you're a scholar in the field, you've probably spent tens of thousands of hours reading and learning about this topic. The problem is that someone else may have spent a similar amount of time learning and come to a different conclusion, and reconciling those differences is hard enough on its own.

However, if you believe your model of the world is correct, you need power to implement it, whether by teaching as a professor at Harvard, writing polemic blogs loosely related to economics for the NYtimes, working at a think tank in DC, working at the State Department, or serving as a politician. There are many ways to obtain power, but under all circumstances you need others to agree with your model and look to you for more guidance. Or, if you only want power, you find the existing models you think are most likely to catch on and join that team. This isn't an academic exercise; it requires people to actually go out, campaign, and build support. But if we did want to simplify it, what would it look like?

You would want people to perceive your model as accurate, which is much different from scientifically convincing them it is accurate. This can be done knowing that humans dislike uncertainty in high-dimensional causal spaces and desperately want the world made clear. More important, humans want to belong to a side. These are models programmed into our brains at birth to prepare us for survival in a tribal world.

It is in the interest of each group to view itself as distinct and different. There are many great theories on rising polarization. Another possibility, though, is that in our relatively new country and form of government, the knowledge of how best to win an election has slowly built on itself. If I went back to the 1960s with my laptop and what are now relatively simple financial econometric models, I could have made millions of dollars. These improved strategies don't emerge from closed rooms with new research on psychology; they are small changes slowly filtered out of decades of electoral experiments in the 20th century.

And this is what I'm left with. First, this post was too long; if I ever want anyone to read anything I write, I need to be more succinct and not bite off more than I can chew. Second, humans try to filter high-dimensional problems using our collective brain power, but we also form in-groups based on shared values, which can be thought of as prior views of the world filtered down from past generations. We are an extended computational system interacting with an overly complex world.

 
