Friday, October 24, 2008

Living in a simulation - part 2

Suppose you are living in a simulation and you want to discover the theory of everything. What would that theory be? Probably in your (simulated) mind it would be the set of laws that govern all physical phenomena in your (simulated) observable universe. You would also want to understand how your universe came about and where it will end up. Let's suppose that the programmer of your universe came up with a set of physical laws and let the simulation run. As I discussed before, the programmer really can't be sure what will happen in his simulation, but let's say he was inspired or lucky and hit upon something that led to a universe that produced an inhabitant who could ask about the theory of everything.

What then is the theory of everything? Well, one answer would be the set of physical laws that the programmer put in. Now suppose that the programmer didn't come up with any laws but just started off a cellular automaton (CA) with some rules and an initial condition. An example of a CA, which Stephen Wolfram's book "A New Kind of Science" describes in great detail, is a one dimensional grid of "cells" that can each be in one of two states. At each time step, every cell is updated according to the states of itself and its two nearest neighbors. Since three contiguous cells can be in 2^3 = 8 configurations, and each configuration can send the middle cell to either of two states, there are 2^8 = 256 possible rule sets. Wolfram has enumerated all of them in his book.
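Since the update is nothing more than a lookup table indexed by the eight possible neighborhoods, an elementary CA fits in a few lines of code. Here is a minimal sketch in Python (my own illustration, not taken from the book), using periodic boundaries:

```python
# Minimal elementary cellular automaton. The rule number's 8 bits give the
# new state for each of the 2^3 = 8 possible (left, center, right)
# neighborhoods, which is why there are 2^8 = 256 rules in total.

def step(cells, rule):
    """Apply one update of elementary CA 'rule' to a list of 0/1 cells."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # neighborhood as a 3-bit number
        new.append((rule >> index) & 1)               # look up bit 'index' of the rule
    return new

# Example: rule 110 (one of the 256 rules), started from a single live cell.
cells = [0] * 31
cells[15] = 1
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 110)
```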

One of Wolfram's former employees, Matthew Cook, proved that rule 110 is a universal computer. Hence, all possible computations (simulations) can be obtained by running through all possible initial conditions. One of these initial conditions corresponds to a program with the same physical laws that the inspired programmer came up with. However, in this case the physical laws will be an emergent phenomenon of the CA. What then is the theory of everything? Is it the set of rules of the cellular automaton? Is it the combination of the rules and the initial condition? Is it still the set of emergent physical laws? In all likelihood, the elementary constituents of the emergent theory will consist of some number of cells. Below this scale the emergent theory will no longer hold, and at the very lowest level there will be rule 110.

Finally, as has been pointed out previously, the inhabitants of a simulation can never know that they are in a simulation. Thus, there is really no way for us to know whether we are living in one. So what does that say about our theory of everything? Will it be an uber string theory, or the CA rule and initial condition? Would we want a theory of everything that included a theory of the programmer and the programmer's world? This is why I've come to adopt the notion that a theory of everything is a theory of computation, and that doesn't really tell us much about our universe.

Sunday, October 19, 2008

Absolute versus relative wealth

There is a debate among sociologists, political scientists and economists over whether absolute or relative wealth is more important. Recent evidence suggests that happiness is linked more to relative wealth than to absolute wealth. The number of people who say they are happy has not gone up with the rise in the standard of living, and may even have come down. Also, a paper in the journal Science last year reported that activation in brain areas related to reward responded more to relative differences in wealth than to absolute amounts. I recall reading an article recently about Silicon Valley millionaires feeling poor and unsatisfied because of the billionaires in their neighbourhood. There was a difference between being rich and being "plane"-rich.

However, the current economic turmoil is uncovering a more complex (or maybe more obvious) interaction at play. The anti-correlation between the performance of the economy and the likelihood of a Democratic US president suggests that there is a threshold effect for wealth: happiness does not go up appreciably above this threshold but certainly goes down a great deal below it. For people above the threshold, other factors start to play a role in their political decisions and sense of well-being; for people below it, the economy is the dominant issue.

This may be why the growth in income disparity did not create much outcry over the past decade or so. While the majority of the population was above its wealth comfort threshold, it didn't particularly care about the new gilded age, since the rich were largely isolated from everyone else. The disparity mostly caused unease among the rich who weren't keeping up with the super-rich. When the majority finally fell below its comfort threshold, though, the backlash came loud and strong. Suddenly, everyone was a populist. However, when (if?) the economy rights itself again, this regulatory fervor will subside as well. The general public will tune out once again, and the forces that pushed for policies favourable to unequal growth will dominate the political discourse.

The system may always be inherently unstable. Suppose that fervor for political activism and where you sit on the left-right divide are uncorrelated and have approximately equal representation. We can then divide people into four types - Active/Left, Active/Right, Nonactive/Left and Nonactive/Right. Also assume that when the economy is doing poorly all the left are motivated, but when the economy is doing well all the right are motivated. A graph in the New York Times yesterday showed that stock market growth has been higher when Democrats are in office, even when you don't count Herbert Hoover, who was in power during the crash of 1929. So let's assume that when the left is in power there is more total economic growth and less income disparity, and when the right is in power there is less total growth but more income disparity. The nonactive fractions of the left and right determine the policy. When things are going well in the economy, the Nonactive/Left relax but the Nonactive/Right become motivated. Thus we have half the population pushing for more right-leaning policies, countered by only a quarter of the population, namely the Active/Left. This then leads to the right attaining power, resulting in a widening of income disparity. When enough people fall below the wealth threshold, the Nonactive/Left become engaged while the Nonactive/Right disengage, which allows the left to come back into power. The interesting thing is that the only way to break this cycle is for the right to enact policies that keep everyone above threshold. The other stable fixed point, namely left-wing policies failing completely and keeping everyone down, would eventually lead to a breakdown of the system.
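To make the cycle concrete, here is a toy simulation of the argument (my own sketch, not a calibrated model): four equal-sized groups, with the nonactive halves switching on or off depending only on whether the economy is above the comfort threshold, and policy assumed to follow whichever side has more active members.

```python
# Toy version of the activism cycle described above. All dynamics are
# assumptions lifted directly from the post: right-leaning policy eventually
# pushes the majority below the comfort threshold; left-leaning policy lifts
# it back above.

def active_fractions(economy_good):
    # Active/Left and Active/Right are always engaged (0.25 each).
    # Nonactive/Left engage only in a bad economy; Nonactive/Right only in a good one.
    left = 0.25 + (0.0 if economy_good else 0.25)
    right = 0.25 + (0.25 if economy_good else 0.0)
    return left, right

economy_good = True
for t in range(8):
    left, right = active_fractions(economy_good)
    in_power = "Left" if left > right else "Right"
    print(f"t={t}: economy {'good' if economy_good else 'bad'}, "
          f"active L={left:.2f} R={right:.2f} -> {in_power} wins")
    economy_good = (in_power == "Left")   # assumed effect of each side's policies
```

Run it and the two sides simply alternate, which is the point: under these assumptions neither outcome is a stable resting place.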

Friday, October 17, 2008

Genetic basis for political orientation

I was listening to a podcast of Quirks and Quarks yesterday that featured an interview with political scientist James Fowler about his recent work showing that the likelihood of voting is partially genetic. (Fowler is the same person who recently argued in a New England Journal of Medicine paper that obese people tend to have obese friends.) The idea that genes may play a role in politics has come up before, most notably in a 2005 paper by John Alford, Carolyn Funk and John Hibbing that argued that political leanings are heritable. That study looked at identical and fraternal twins and found that the heritability of political ideology was about 50%. The work didn't say that genes could predict party affiliation, just how a person stood on the left-right divide on a number of issues. Fowler hypothesized that politics has a genetic basis because, back in our hunter-gatherer days, figuring out how to divide the spoils of a hunt was important to the survival of the troop.

Following Fowler, I can imagine how early humans might take two approaches to dividing up a downed mastodon. The paleo-leftists would argue that the meat should be shared equally among everyone in the tribe. The rightwingers would argue that each tribe member's share should be based solely on how much they contributed to that hunt. My guess is that any ancient group with approximately equal representation of these two opposing views would outcompete groups with unanimous agreement on either viewpoint. In the rightwing society, the weaker members of the group simply wouldn't eat as much and hence would have a lower chance of survival, reducing the population and diversity of the group. The result might be a group of excellent hunters that isn't so good at adapting to changing circumstances. In the proto-socialist group, the incentive to go out and hunt would be reduced, since everyone would eat no matter what. This might make hunts less frequent and again weaken the group. The group with political tension might compromise on a solution where everyone gets some share of the spoils but there are incentives or peer pressure to contribute. This may be why genes for both left and right leanings have persisted.

If this is true, then it would imply that we may always have political disagreement and that the pendulum will continuously swing back and forth between left and right. However, this doesn't imply that progress can't take place. No one in a modern society tolerates slavery, even though it was the central debate a hundred and fifty years ago. Hence, progress is made by moving the center, and arguments between the left and the right lead to fluctuations around this center. A shrewd politician can take advantage of this fact by focusing on how to frame an issue instead of trying to win an argument. If she can create a situation where the two sides argue about a matter tangential to the pertinent issue, then the goal can still be achieved. For example, suppose a policy maker wanted to do something about global warming. The strategy should not be to go out and try to convince people of what to do. Instead, it may be better to find a person on the opposite side of the political spectrum (who also wants to do something about global warming) and then stage debates on their policy differences. One side could argue for strict regulations and the other for tax incentives. They then achieve their aim by getting the country to take sides on how to deal with global warming, instead of arguing about whether or not it exists.

Saturday, October 11, 2008

Complexity of art and science

There seems to be a consensus that art cannot be compressed. A plot summary of Hamlet is not the same as Hamlet. A photo of Picasso's Guernica is not the same as the actual painting in Madrid. Music is a particularly interesting case. A Bach partita can be written down in a few thousand bits, but reading the score is not the same as hearing it played by Heifetz or Menuhin. One could even argue that one performance by the same artist is not the same as a recording, or even as another performance.

This is in contrast to science. We all know the theory of evolution, but most of us have never read Darwin's On the Origin of Species. The three volumes of Newton's Principia Mathematica can now be reduced to F = ma and F = G m1 m2 / r^2. Obviously, it takes some concerted study to understand these equations, but one doesn't need to read Newton to do so. It is interesting that scientists tend to worry a lot about priority of discovery while they are alive, but unless their name is directly associated with a concept, theorem or equation, the provenance of many scientific ideas tends to get lost. Quantum mechanics is often taught before classical mechanics now, so most starting students have no idea why the energy function is called a Hamiltonian. The concept of conservation of energy is so natural to scientists now that most people don't realize how long it took to be established, or who the main players were.

If art is not compressible, then we can interpret the complexity of the brain in terms of the complexity of art. The complete works of Shakespeare run a little over 1200 pages. Estimating 5000 characters per page and 8 bits per character gives a total size of under 50 million bits, which is not very much compared to the hard drive on your computer. Charles Dickens was much more prolific in terms of words generated; Bleak House alone is over 1000 pages. I haven't counted all the pages of all twenty-plus novels, but let's put his total output at, say, a billion bits.
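The back-of-the-envelope arithmetic, with the Dickens page counts being pure guesses on my part:

```python
# Rough size estimates, assuming ~5000 characters per page and 8 bits per character.
shakespeare_bits = 1200 * 5000 * 8              # ~48 million bits
print(f"Shakespeare: ~{shakespeare_bits / 1e6:.0f} million bits")

# Guessing ~20 novels averaging ~800 pages (my assumption) puts Dickens
# within an order of magnitude of a billion bits.
dickens_bits = 20 * 800 * 5000 * 8
print(f"Dickens: ~{dickens_bits / 1e6:.0f} million bits")
```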

If art is incompressible, then there could not be an algorithm smaller than a billion bits that could have generated the work of Dickens. This would put a lower bound on the complexity of the "word generation" capabilities of the brain. Now, if you are uncharitable (as some famous authors have been), you could argue that Dickens had a formula to generate his stories, so the complexity is actually less. One way to do this would be to take a stock set of themes, plots, characters and phrases and then randomly assemble them. Some supermarket romances are supposedly written this way. However, no one would argue that they compare in any way to Dickens, much less Shakespeare. Given that Kolmogorov complexity is uncomputable, we can never know for sure whether art is compressible. So a challenge to computer scientists is to generate literature with a program shorter than the work itself.
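One crude way to probe this (my illustration, not a test proposed in the post) is to run a text through a general-purpose compressor, which gives an upper bound on its Kolmogorov complexity; the file name below is hypothetical.

```python
# Compress a text as an upper bound on its Kolmogorov complexity.
import bz2

with open("complete_shakespeare.txt", "rb") as f:   # hypothetical local file
    raw = f.read()

compressed = bz2.compress(raw, 9)
print(f"original:   {len(raw) * 8} bits")
print(f"compressed: {len(compressed) * 8} bits "
      f"({len(compressed) / len(raw):.0%} of original)")
# Whatever ratio this prints is only an upper bound: the true Kolmogorov
# complexity could be far smaller, and no program can certify the minimum.
```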

Friday, October 03, 2008

Modeling the financial crisis

There is an interesting op-ed piece in the New York Times this week by physicist and science writer Mark Buchanan on predicting the current financial crisis. His argument is that traditional economists were unable to predict or handle the current situation (Nouriel Roubini notwithstanding) because their worldviews are shaped by equilibrium theorems, which unfortunately are either incomplete or wrong. Buchanan writes:

Well, part of the reason is that economists still try to understand markets by using ideas from traditional economics, especially so-called equilibrium theory. This theory views markets as reflecting a balance of forces, and says that market values change only in response to new information — the sudden revelation of problems about a company, for example, or a real change in the housing supply. Markets are otherwise supposed to have no real internal dynamics of their own. Too bad for the theory, things don’t seem to work that way.

Nearly two decades ago, a classic economic study found that of the 50 largest single-day price movements since World War II, most happened on days when there was no significant news, and that news in general seemed to account for only about a third of the overall variance in stock returns. A recent study by some physicists found much the same thing — financial news lacked any clear link with the larger movements of stock values.

Certainly, markets have internal dynamics. They’re self-propelling systems driven in large part by what investors believe other investors believe; participants trade on rumors and gossip, on fears and expectations, and traders speak for good reason of the market’s optimism or pessimism. It’s these internal dynamics that make it possible for billions to evaporate from portfolios in a few short months just because people suddenly begin remembering that housing values do not always go up.

Really understanding what’s going on means going beyond equilibrium thinking and getting some insight into the underlying ecology of beliefs and expectations, perceptions and misperceptions, that drive market swings.

He then goes on to describe the work of some pioneers who are trying to model the actual dynamics of markets. A Yale economist and two physicists (Doyne Farmer being one of them) used an agent-based model to simulate a credit market. They found that as the leverage (the amount of money borrowed to amplify gains) increases, there is a phase transition or bifurcation from a functioning credit market to an unstable situation that results in a financial meltdown.
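The article describes that study only in broad strokes, so here is a deliberately minimal toy along the same lines (my own sketch, not their model): a single leveraged fund that must sell into a falling market, with every parameter value invented for illustration.

```python
# Toy illustration of why leverage can destabilize a market: losses force the
# fund to sell, the sale depresses the price, and the lower price deepens the
# losses. Not the model from the article; all parameters are assumptions.
import random

def run(leverage, steps=250, seed=1):
    """One leveraged fund trading one asset; returns 'default' or 'survived'."""
    random.seed(seed)
    price, wealth = 1.0, 1.0
    position = leverage * wealth / price              # units of the asset held
    for _ in range(steps):
        old_price = price
        price *= 1 + random.gauss(0, 0.02)            # exogenous "news" shock
        wealth += position * (price - old_price)      # mark-to-market gain/loss
        if wealth <= 0:
            return "default"
        target = leverage * wealth / price            # holdings that keep leverage on target
        if position > target:                         # losses force a sale ...
            sale = position - target
            impacted = price * max(1 - 0.01 * sale, 0.05)  # ... which depresses the price
            wealth += target * (impacted - price)     # remaining holdings lose value too
            price = impacted
            if wealth <= 0:
                return "default"
        position = target
    return "survived"

for lev in (1, 5, 15, 30):
    print(f"leverage {lev:2d}: {run(lev)}")
```

Even in this cartoon, the self-reinforcing sell-off only bites once leverage is large, which is the qualitative point of the work Buchanan highlights.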

I found this article interesting on two points. The first is the contrast between two worldviews: the theorem-proving mathematical economist versus the computational physicist modeler. The second is the premise that the collective dynamics of a group of individuals can be simpler than the behavior of a single individual; a thousand brains may have a lower Kolmogorov complexity than a single brain. My guess is that biologists (Jim Bower?) may not buy this. Although my worldview is more in line with Buchanan's, in many ways his view is on less stable ground than traditional economics. With an efficient market of rational players, you can at least make some precise statements, whereas with an agent-based model there is little understanding of how the models scale and how sensitively the outcomes depend on the rules. Sometimes it is better to be wrong with full knowledge than to be accidentally right.

I've always been intrigued by agent-based models but have never figured out how to use them effectively. My work has tended to rely on differential equation models (deterministic and stochastic) because I generally know what to expect from them. With an agent-based model, I don't have a feel for how it scales or how sensitive it is to changes in the rules. However, this lack of certainty (which also exists for nonlinear differential equations; just look at Navier-Stokes, for example) may be inherent in the systems they describe. It could simply be that some complex problems are so intractable that any model of them will rely on having good prior information (gleaned from any and all sources) or plain blind luck.