Saturday, August 02, 2008

Penrose redux

In 2006, I posted my thoughts on Roger Penrose's argument that human thought must be noncomputable. Penrose's argument rests on Godel's incompleteness theorem, which states that in a consistent formal system (e.g. arithmetic with the integers) there exist true statements that cannot be proved within that system. The proof basically boils down to showing that statements like "this statement cannot be proved" are true but unprovable, because if they could be proved there would be an inconsistency in the system. Turing later showed that this is equivalent to saying that there are problems, known as undecidable or uncomputable problems, that a computer cannot solve. From these theorems, Penrose draws the conclusion that since we can recognize that unprovable statements are true, we must not be computers.

My original argument refuting Penrose's claim was that we didn't really know what formal system we were using, or whether it remained fixed, so we couldn't know if we were recognizing true statements that we can't prove. However, I now have a simpler argument: no human has ever solved an uncomputable problem, and hence no human has shown they are more than a computer. Knowing about uncomputability is not an example, since a machine could have the same knowledge; Godel's and Turing's proofs (like all proofs) are computable. Another way of saying this is that any proof, or anything else that can be written down in a finite number of symbols, could also be produced by a computer.

An example is the fact that you can infer the existence of the real numbers using only the integers. Thus, even though the real numbers are uncountable and thus uncomputable, we can prove lots of properties about them using only integers. The Dedekind cut can be used to prove the completeness of the real numbers without resorting to the axiom of choice. Humans and computers can reason about real numbers, and about physical theories based on real numbers, without ever having to deal directly with real numbers. To paraphrase: reasoning about uncomputable problems is not the same as solving uncomputable problems. So until a human being can reliably tell me whether any Diophantine equation (a polynomial equation with integer coefficients) has a solution in integers (i.e. Hilbert's tenth problem), or can always tell whether a given program will halt (i.e. the halting problem), I'll continue to believe that a computer can do whatever we can do.
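
To make the halting-problem claim concrete, here is a minimal sketch of Turing's diagonalization argument in Python. The names `halts` and `contrarian` are hypothetical, used only for illustration; the whole point is that no correct `halts` can ever be written.

```python
# Hypothetical halting oracle: returns True if running program(inp) would
# eventually stop, False if it would run forever.  Assume, for contradiction,
# that someone hands us a correct implementation.
def halts(program, inp):
    raise NotImplementedError("no correct implementation can exist")

def contrarian(program):
    # Do the opposite of whatever the oracle predicts the program does
    # when fed its own source.
    if halts(program, program):
        while True:      # loop forever
            pass
    else:
        return           # halt immediately

# Feeding contrarian to itself yields a contradiction:
# if halts(contrarian, contrarian) is True, contrarian loops forever;
# if it is False, contrarian halts.  Either way the oracle is wrong,
# so no algorithm can decide halting in general.
```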

Sunday, July 27, 2008

Limits to thought and physics

In a recent post, I commented on Read Montague's proposal that it may be necessary to account for the limits of psychology when developing theories of physics. I disagreed with his thesis because if we accept that physics is computable and the brain is described by physics, then any property of physics should be discernible by the brain. However, I should have been more careful in my statements. Given the theorems of Godel and Turing, we must also accept that there may be certain problems or questions that are not decidable or computable. The most famous example is that there is no algorithm to decide whether an arbitrary computation will ever stop; in computer science this is known as the halting problem. (Godel's incompleteness theorems are direct consequences of the undecidability of the halting problem, although historically Godel's theorems came first.) The implication is quite broad, for it also means that there is no surefire way of knowing whether a given computation will do a particular thing (e.g. print out a particular symbol). This is also why there is no certain way of ever knowing whether a person is insane or whether a criminal will commit another crime, as I claimed in my post on crime and neuroscience.

Hence, it may be that some theories in physics, like the ultimate super-duper theory of everything, are in fact undecidable. However, this is not just a problem of psychology but also a problem of physics. Some, like British mathematical physicist Roger Penrose, would argue that the brain and physics are actually not computable. Penrose's arguments are outlined in two books - The Emperor's New Mind and Shadows of the Mind. It could be that the brain is not computable, but I (and many others, for various reasons) don't buy Penrose's argument for why it is not. I posted on this topic previously, although I've refined my ideas considerably since that post.

However, even if the brain and physics were not computable there would still be a problem because we can only record and communicate ideas with a finite number of symbols and this is limited by the theorems of Turing and Godel. It could be possible that a single person could solve a problem or understand something that is undecidable but she would not be able to tell anyone else what it is or write about it. The best she could do is to teach someone else how to get into such a mental state to "see" it for themselves. One could argue that this is what religion and spiritual traditions are for. Buddhism in a crude sense is a recipe for attaining nirvana (although one of the precepts is that trying to attain nirvana is a surefire way of not attaining it!). So, it could be possible that throughout history there have been people that have attained a level of, dare I say, enlightenment but there is no way for them to tell us what that means exactly.

Friday, July 18, 2008

Neural Code

An open question in neuroscience is: what is the neural code? By that is meant: how is information represented and processed in the brain? I would say that the majority of neuroscientists, especially experimentalists, don't worry too much about this problem and implicitly assume what is called a rate code, which I will describe below. There is then a small but active group of experimentalists and theorists who are keenly interested in this question, and there is a yearly conference, usually at a ski resort, devoted to the topic. I would venture to say that within this group - who use tools from statistics, Bayesian analysis, machine learning and information theory to analyze data obtained from in vivo multi-electrode recordings of neural activity in awake or sedated animals given various stimuli - there is more skepticism towards a basic rate code than in the general neuroscience community.

For the benefit of the uninitiated, I will first give a very brief and elementary review of neural signaling. The brain consists of 10^11 or so neurons, which are intricately connected to one another. Each neuron has a body, called the soma, an output cable, called the axon, and input cables, called the dendrites. Axons "connect" to dendrites through synapses. Neurons signal each other with a hybrid electro-chemical scheme. The electrical part involves voltage pulses called action potentials or spikes. The spikes propagate down axons through the movement of ions across the cell membrane. When the spikes reach a synapse, they trigger a release of neurotransmitters, which diffuse across the synaptic cleft, bind to receptors on the receiving end of the synapse and induce either a depolarizing voltage pulse (excitatory signal) or a hyperpolarizing voltage pulse (inhibitory signal). In that way, spikes from a given neuron can either increase or decrease the probability of spikes in a connected neuron.

The neuroscience community is basically all in agreement that neural information is carried by the spikes. So the question of the neural code becomes: how is information coded into spikes? For example, if you look at an apple, something in the spiking pattern of the neurons in the brain is representing the apple. Does this change involve just a single neuron? This is called the grandmother cell code, from the joke that there is a single neuron in the brain that represents your grandmother. Or does it involve a population of neurons, known, not surprisingly, as a population code? How does the spiking pattern change? Neurons have some background spiking rate, so do they simply spike faster when they are coding for something, or does the precise spiking pattern matter? If it is just a matter of spiking faster, then this is called a rate code, since it is just the spiking rate of the neuron that carries the information. If the pattern of the spikes matters, then it is called a timing code.

The majority of neuroscientists, especially experimentalists, implicitly assume that the brain uses a population rate code. The main reason they believe this is that in most systems neuroscience experiments, an animal is given a stimulus and then neurons in some brain region are recorded to see if any respond to that particular stimulus. To measure the response, one often counts the number of spikes in some time window, say 500 ms, and checks whether the count exceeds some background level. What seems to be true from almost all of these experiments is that no matter how complicated a stimulus you try, a group of neurons can usually be found that responds to it. So, the code must involve some population of neurons and the spiking rate must increase. What is not known is which and how many neurons are involved and whether or not the timing of the spikes matters.
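
As a concrete illustration of the spike-count procedure just described, here is a minimal sketch with made-up numbers (Poisson background and evoked rates, and an arbitrary "twice baseline" criterion); it is not data or analysis from any particular experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated spike times (seconds) for one neuron on one trial:
# Poisson background at 5 Hz over 3 s, plus an elevated 20 Hz response
# while a stimulus is on between 1.0 s and 1.5 s.
background = rng.uniform(0.0, 3.0, rng.poisson(5 * 3.0))
response = rng.uniform(1.0, 1.5, rng.poisson(20 * 0.5))
spike_times = np.sort(np.concatenate([background, response]))

def rate_in_window(spikes, t_start, width=0.5):
    """Spike-count estimate of the firing rate (Hz) in a window."""
    count = np.sum((spikes >= t_start) & (spikes < t_start + width))
    return count / width

baseline = rate_in_window(spike_times, 0.0)   # before the stimulus
evoked = rate_in_window(spike_times, 1.0)     # 500 ms stimulus window

# A crude "rate code" criterion: call the neuron responsive if the
# evoked rate exceeds, say, twice the baseline rate.
print(f"baseline {baseline:.1f} Hz, evoked {evoked:.1f} Hz:",
      "responsive" if evoked > 2 * baseline else "not responsive")
```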

My sense is that the neural code is a population rate code but the population and time window change and adapt depending on context. Thus understanding the neural code is no simpler than understanding how the brain computes. In molecular biology, deciphering the genetic code ultimately led to understanding the mechanisms behind gene transcription but I think in neuroscience it may be the other way around.

Tuesday, July 08, 2008

Crime and neuroscience

The worry in the criminal justice system is that people will start trying to use neuroscience in their defense. For example, they will claim that their brain was faulty and that it committed the crime. I think some have already tried this. I think the only way out of this predicament is to completely reframe how we administer justice. Currently, establishing criminal liability requires establishing the perpetrator's intent to commit the crime (with negligence treated as a culpable state of mind). This is why insanity is a viable defense. I think this notion will gradually become obsolete as the general public comes to accept the mechanistic explanation of mind. When the lack of any real discontinuity between man and machine is finally accepted by most people, the question of intent is going to be problematic. That is why we need to start rethinking this whole enterprise now.

My solution is that we should no longer worry about intent or even treat justice as a form of punishment. What we should do in a criminal trial is determine a) whether the defendant actually participated in the crime and b) whether they are dangerous to society. By this reasoning, putting a person in jail is only necessary if they are dangerous to society. For example, violent criminals would be locked up, and they would only be released if it could be demonstrated that they were no longer dangerous. This scheme would also eliminate the concept of punishment. The duration of a jail sentence should not be about punishment but only about benefit to society. I don't believe that people should be absolved of their crimes. For nondangerous criminals, some form of retribution in terms of garnished wages or public service could be imposed. Also, if some form of punishment can be shown to be a deterrent, then it would be allowed as well. I'm sure there are many kinks to be worked out, but what I am proposing is that a fully functional legal system could be established without requiring a moral system to support it.

This type of legal system will be necessary when machines become sentient. Due to the theorems of Godel and Turing, it will be impossible to prove that a machine will not be defective in some way. Thus, some machines may commit crimes. That they are sentient also means that we cannot simply go around disconnecting machines at will. Each machine deserves a fair hearing to establish guilt and sentencing. Given that there will be no algorithmic way to establish with certainty whether or not a machine will repeat its crime, justice for machines will have to be administered in the same imperfect way it is administered for humans.

Friday, July 04, 2008

Oil Consumption

I thought it would be interesting to convert the amount of oil consumed by the world each year into cubic kilometres to give a sense of scale of how much oil we use and how much could possibly be left. The world consumes about 80 million barrels of oil a day. Each barrel is 159 litres and a cubic kilometre corresponds to 10^12 litres, so the amount of oil the world consumes in a year is equivalent to about 4.6 cubic kilometres. This would correspond to a cubic pit with sides that are about 1.7 km long. The US consumes a little over a quarter of the world's oil, about 1.2 cubic kilometres a year. The Arctic National Wildlife Refuge is estimated to hold about 10 billion barrels of recoverable oil, which at 80 million barrels a day is roughly a third of a year's worldwide supply.

Proven world reserves amount to about 1.3 trillion barrels of oil, or about 200 cubic kilometres. If we continue at current rates of consumption, then we have about 40 years of oil left. If the rest of the world decided to use oil like Americans do, then the world's yearly consumption could increase by a factor of 5, which would bring us down to 8 years' worth of reserves. However, given that the surface of the earth is about 500 million square kilometres, it seems plausible that there is a lot more oil out there that hasn't been found, especially under the deep ocean. The main constraints are cost and greenhouse gas emissions. We may not run out of oil anytime soon but we may have run out of cheap oil already.
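
The arithmetic in these two paragraphs is easy to check; here is a minimal sketch that redoes the conversions using the same round input figures quoted above:

```python
# Rough oil arithmetic using the round figures quoted above.
BARREL_LITRES = 159
LITRES_PER_KM3 = 1e12

world_bbl_per_day = 80e6
world_km3_per_year = world_bbl_per_day * 365 * BARREL_LITRES / LITRES_PER_KM3
print(f"world consumption: {world_km3_per_year:.1f} km^3/yr")   # ~4.6
print(f"cube side: {world_km3_per_year ** (1/3):.2f} km")       # ~1.7

us_km3_per_year = world_km3_per_year / 4                        # ~1.2
anwr_years = 10e9 / (world_bbl_per_day * 365)                   # ~0.34 of a year
print(f"ANWR: {anwr_years:.2f} years of world supply")

reserves_bbl = 1.3e12
print(f"reserves: {reserves_bbl * BARREL_LITRES / LITRES_PER_KM3:.0f} km^3")          # ~200
print(f"years left at current rate: {reserves_bbl / (world_bbl_per_day * 365):.0f}")  # ~45
```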

Tuesday, June 17, 2008

Physics and psychology

In the epilogue of his book, which I blogged about recently, Montague argued that a marriage of psychology and physics is in order. His thesis is that our intuitions about the world are based on flawed systems that were only designed to survive and reproduce. There are several responses to his argument. The first is that while we do rely on intuition to do math and physics, the intuition is based on learned concepts more than our "primitive" intuitions. For example, logical inference itself is completely nonintuitive. Computer scientist Scott Aaronson gives a nice example. Consider cards with a letter on one side and a number on the other. You are given 2, K, 5, J and the statement "every J has a 5 on the back". Which cards do you need to turn over to see if the statement is true? (Hint: if A implies B, then the only conclusion you can draw is that not B implies not A.) Most college freshmen get this wrong. Quantum mechanics and thermodynamics are notoriously counterintuitive and difficult to understand. We were led to these theories only after doing careful experiments that could marginalize away our prior beliefs.
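
(For the curious, the answer can be checked by brute force. The sketch below is purely illustrative; it just asks, for each visible face, whether some hidden face could falsify the rule, which is exactly the "not B implies not A" logic in the hint.)

```python
# Wason selection task: each card has a letter on one side and a number
# on the other.  Rule to test: "every J has a 5 on the back."
letters = "JKQA"          # some possible hidden letters
numbers = [2, 3, 5, 7]    # some possible hidden numbers
visible = ["2", "K", "5", "J"]

def could_falsify(face):
    """Must we flip this card?  Yes iff some hidden side could break the rule."""
    if face.isdigit():
        # Hidden side is a letter; the rule breaks only if it is a J
        # and the visible number is not 5.
        return any(letter == "J" and int(face) != 5 for letter in letters)
    else:
        # Hidden side is a number; the rule breaks only if the visible
        # letter is J and the hidden number is not 5.
        return any(face == "J" and n != 5 for n in numbers)

print([face for face in visible if could_falsify(face)])   # ['2', 'J']
```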

However, that is not to say that we couldn't be stuck on questions like what dark matter is, or why we remember the past and not the future, because of some psychological impediment. A resolution to this issue could again hinge on whether or not physics is computable. Montague doesn't think so, but his examples do not constitute a proof. Now if physics is computable and the brain is governed by the natural laws of physics, then the brain is also computable. In fact, this is the simplest argument to refute all those who doubt that machines can ever think. If one accepts that the brain is part of the natural world and that physics can be simulated, then we can always simulate the brain, and hence the brain is computable. Now if the brain is computable, then any phenomenon in physics can be understood by the brain, or at least computed by the brain. In other words, if physics is computable, then given any universal Turing machine, we can compute any physical phenomenon (given enough time and resources).

There is one catch to my argument, and that is the fact that if we believe the brain is computable then we must also accept that it is finite and thus less powerful than a Turing machine. In that case, there could be computations in physics that we can't understand with our finite brains. However, we could augment our brains with extra memory (singularity, anyone?) to complete a computation if we ever hit our limit. The real question is again tractability. It could be that some questions about physics are intractable from a purely computational point of view. The only way to "understand" these things is to use some sort of meta-reasoning or some probabilistic algorithm. It may then be true that the design of our brains through evolution impedes our ability to understand concepts that are outside the range they were designed for.

Personally, I feel that the brain is basically a universal Turing machine with a finite tape, so that it can do all computations up to some scale. We can thus only understand things with a finite amount of complexity. The way we approach difficult problems is to invent a new language to represent complex objects and then manipulate the new primitives. Thus our day-to-day thinking uses about the same amount of processing, but accumulated over time we can understand arbitrarily difficult concepts.

Tuesday, June 10, 2008

Why so slow?

John Tierney of the New York Times shows a figure from Ray Kurzweil of a log-log plot of the time between changes in history, from the appearance of life and multicellular organisms to new technologies like televisions and computers. His graph shows power law scaling with an exponent of negative one, which I obtained by eyeballing the curve. In other words, if dT is the time until the appearance of the next great change, then it scales as 1/T where T is the time. I haven't read Kurzweil's book, so maybe I'm misinterpreting the graph. The fact that there is scaling over such a long time is interesting, but I want to discuss a different point. Let's take the latter part of the curve regarding technological innovation. Kurzweil's argument is that the pace of change is accelerating so we'll soon be enraptured in the Singularity (see previous post). However, the rate of appearance of new ideas seems to be increasing only linearly with T. So the number of new ideas accumulates as T^2, which is far from exponential. Additionally, the population is increasing exponentially (at least over the last few hundred years). Hence the number of ideas per person goes as T^2 exp(-T). I'm not sure where we are on the curve, but after an initial increase, the number of ideas per person actually decreases exponentially. I was proposing in the last post that the number of good ideas scales with the population, but according to Kurzweil I was being super optimistic. Did I make a mistake somewhere?
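
Here is a minimal sketch of that little derivation, with all proportionality constants set to one purely for illustration:

```python
import numpy as np

# If the time between innovations scales as dT ~ 1/T, the innovation rate
# scales as T, the cumulative number of ideas as T^2, and with exponential
# population growth the ideas per person go as T^2 * exp(-T) (in units
# where the population growth rate is 1).
T = np.linspace(0.01, 10, 1000)
ideas_per_person = T**2 * np.exp(-T)

peak = T[np.argmax(ideas_per_person)]
print(f"ideas per person peak near T = {peak:.2f}")  # analytically at T = 2
# Beyond the peak the exponential wins and ideas per person decay.
```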

Friday, June 06, 2008

The singularity

Steve Hsu points to an IEEE Spectrum special report on The Singularity, which is variously described as the point in time when machine intelligence surpasses human intelligence so that machines can then go on to improve themselves without our help. The super optimists like Ray Kurzweil think this will happen in 15 years and when it happens machine intelligence will increase at doubling times that will be hours, minutes or shorter. The machines will then solve all of the world's problems instantly. He also thinks we can upload our minds into the machine world and be immortal.

Personally, I think that there is no question that humans are computable (in the Turing sense), so there is no reason that a machine that can think like we do can't someday exist (we being an existence proof). I have no idea when this will happen. Having studied computational neuroscience for about 15 years now, I can say that we aren't that much closer to understanding how the brain works than we were back then. I have a personal pet theory, which I may expound on sometime in the future, but it's probably no better than anyone else's. I'm fairly certain machine intelligence won't happen in 15 years and may not in my lifetime.

The argument that the singularity enthusiasts use is a hyper-generalization of Moore's law of exponential growth in computational power. They apply the law to everything and then extrapolate. For example, anything that doubles each year (which is a number Kurzweil sometimes uses) will improve by a factor of 2^15 ≈ 32,000 in 15 years. To Kurzweil, we are just a factor of 2^15 away from the singularity.

There are two quick responses to such a suggestion: the first is where did 2^15 come from, and the second is nonlinear saturation. I'll deal with the second issue first. In almost every system I've ever dealt with there is usually some form of nonlinear saturation. For example, some bacteria can double every 20 minutes. If it weren't for the fact that they eventually run out of food and stop growing (i.e. nonlinear saturation), a single colony would cover the earth in less than a week. Right now components on integrated circuits are less than 100 nm in size. Thus in fewer than 10 more halvings of feature size they will be smaller than atoms. Hence, Moore's law as we know it can only go on for another 20 years at most. To continue the pace beyond that will require a technological paradigm shift, and there is no successor on the horizon. The singularists believe in the Lone Ranger hypothesis: something will come to the rescue. However, even if computers do get faster and faster, software is not improving at anything near the same pace. Arguably, Word is worse now than it was 10 years ago. My computer still takes a minute to turn on. The problem is that good ideas don't seem to be increasing exponentially. At best they seem to scale only linearly with the population.
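
Two of the numbers above are easy to check with a minimal sketch; the logistic carrying capacity below is an arbitrary stand-in for "running out of food":

```python
import numpy as np

# 1) Feature-size check: halving 100 nm ten times lands near atomic scale.
print(100e-9 / 2**10)          # ~9.8e-11 m, i.e. about 0.1 nm

# 2) Pure doubling vs. nonlinear saturation (logistic growth).
#    Bacteria doubling every 20 minutes, with an arbitrary carrying
#    capacity K standing in for the food supply.
r = np.log(2) / 20.0           # per-minute growth rate for a 20-min doubling
K = 1e12                       # arbitrary cap on population size
t = np.arange(0, 60 * 24 * 2)  # two days, in minutes

exponential = np.exp(r * t)                      # unbounded doubling
logistic = K / (1 + (K - 1) * np.exp(-r * t))    # same rate, but saturates

print(f"after 2 days: exponential {exponential[-1]:.2e}, "
      f"logistic {logistic[-1]:.2e} (pinned near K)")
```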

That leads us to the first point. How far away are we from building a thinking machine? The answer is that we haven't a clue. We may just need a single idea or it might be several. Over the past 50 years or so we've really only had a handful of truly revolutionary ideas about neural functioning. We understand a lot about the machinery that the brain runs on but very little about how it all works together to create human intelligence. We are making progress but it's slow. Nonlinearity could help us here, because we may be near a bifurcation that takes us to a new level of understanding. However, that is not something predictable from exponential growth.

The other thing about the singularity is that the enthusiasts seem to think that intelligence is unlimited, so that thinking machines can instantly solve all of our problems. Well, if physics is computable (see previous post), then no amount of intelligence can solve the halting problem or Hilbert's tenth. If we believe that P is not equal to NP, then no hyper intelligence can solve intractable problems, unless the claim extends to the ability to compute infinitely fast. I would venture that no amount of intelligence will ever settle the argument of who was the greatest athlete of all time. Many of our problems are due to differences in prior beliefs, and that can't be solved by more intelligence. We have enough wealth to feed everyone on the planet, yet people still starve. Unless the singularity implies that machines control all aspects of our lives, there will be human problems that will not be solved by extra intelligence.

The example enthusiasts sometimes give of a previous singularity is the emergence of humans. However, from the point of view of the Dodo bird, humpback whale, buffalo, polar bear, American elm tree, and basically most other species, life got worse following the rise of humans. We're basically causing the greatest extinction event since the fall of the dinosaurs. So who's to say that when the machine singularity strikes, we won't be left behind similarly? Machines may decide to transform the earth to suit their needs and basically ignore ours.

Friday, May 30, 2008

Are humans computable?

I picked up Read Montague's book on decision making - Why Choose This Book?: How We Make Decisions - this evening at Barnes and Noble and read the epilogue with the provocative title: Are Humans Computable? In this chapter, Montague puts forward the argument that protein-protein interactions are not computable and hence neither is the brain, or physics for that matter. He argues that proteins and worldly stuff have physical properties that are not computable but can be strung together algorithmically to achieve an end, such as the brain. As an example, he suggests that writing down the equations that govern a nuclear reactor does not suddenly create energy because the equations lack the "physical properties" necessary for a chain reaction. His final point is that our intuitions are based on a brain that is only designed to survive and reproduce and thus physics is ineluctably intertwined with psychology. He proposes that the next frontier is to incorporate the limitations of human perception into new physical theories.

I have been reading and thinking about the theory of computation lately. I have a host of incomplete and inchoate ideas on the topic that are not ready for prime time but after reading Montague's chapter I thought it would be useful to put some things down before I forget them. Montague has touched on some interesting and deep questions but I believe his particular point of view is flawed. He incorrectly conflates intractability with uncomputability and an algorithm with a computation. This post will only touch briefly on the very many issues related to this topic.

On protein-protein interactions, Montague writes that the totality of possible interactions is unimaginably large and thus could never fit on a realizable computer. He equates this with being uncomputable. Protein-protein interactions may not be computable, but not because they can't fit on a computer. A problem is deemed uncomputable or undecidable if a computer (i.e. Turing machine) cannot solve (decide) it. This means that the problem cannot be solved by any algorithm. Perhaps Montague knows this but his editor forced him to tone down the technical details. The most famous undecidable problem is the halting problem: there is no algorithm that can tell whether an arbitrary computation will ever stop. Hilbert's tenth problem, on the solvability of Diophantine equations in integers, is also undecidable. It is not known whether protein-protein interactions, and by implication all of physics, are decidable, but almost everyone in physics and applied math believes they are, whether they know it or not. One notable exception is Roger Penrose, who believes that quantum gravity is uncomputable, but that is another story for another post.

The question is not as trivial as it sounds. Kurt Godel, Alonzo Church, Alan Turing, and many other twentieth century mathematicians established the criteria for computability and I hope to get to their ideas in more depth in future posts. However, for now in a nutshell, the question of computability comes down to the cardinality of the set you are trying to compute. If the set of possible outcomes of a problem is countable, then the problem is computable. If it is not countable, then it is not. Now to physics: If we believed that space-time were a true continuum (i.e. described by real numbers) then the set of all possible configurations of two proteins would be uncountable and Montague would be correct that the dynamics are uncomputable.

However, there are two very plausible responses to this problem. The first is that although real numbers are uncomputable, they can be approximated arbitrarily closely by the countable rational numbers. This is the foundation of numerical analysis, which shows that any continuous dynamics can be approximated arbitrarily well by a discretized system. That's how we do numerical simulations for weather prediction, airflow over an airplane wing, and even protein-protein interactions. The reason that numerical simulations work is that there is an underlying smoothness to the dynamics, so we can approximate it by a finite set of points. In essence, we can predict what will happen next for short enough times and distances. Now, this need not be true. It could be that space-time is chaotic at small scales so that no discretization can approximate it. This is proposed in some theories of quantum gravity. However, even if that were the case, there is probably some averaging over larger scales that effectively smooths the underlying turbulence (think of the uncertainty principle) to make physics effectively computable, and that is what we deal with on a day-to-day basis.
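
The numerical-analysis point can be made concrete with a minimal sketch: forward Euler applied to dx/dt = -x, an equation chosen purely for illustration because its exact solution is known. Halving the step size roughly halves the error, which is the sense in which a countable grid approximates continuous dynamics arbitrarily well.

```python
import numpy as np

# Approximate dx/dt = -x, x(0) = 1, whose exact solution is exp(-t),
# using only finitely many grid points.
def euler_error(dt, t_end=1.0):
    steps = int(round(t_end / dt))
    x = 1.0
    for _ in range(steps):
        x += dt * (-x)                 # forward Euler update
    return abs(x - np.exp(-t_end))     # error against the exact solution

for dt in [0.1, 0.05, 0.025, 0.0125]:
    print(f"dt = {dt:<7} error = {euler_error(dt):.2e}")
```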

The second way to argue for the computability of the universe is that the entropy of the observable universe is finite. Entropy is basically the logarithm of the number of microstates. Thus if entropy is finite then the number of possible configurations of the universe is countable (actually finite) so again physics is computable. Why is the entropy of the universe finite? Again, think quantum mechanics. If space-time is quantized then there will be a smallest scale, namely the Planck length which is about 10^-35 metres.

However, Montague does have a point that a simulation of the interactions of two or more proteins could be intractable, in that it would take an immense amount of memory or time to do the computation. The field of computational complexity examines questions of intractability, of which the most famous is whether or not P = NP. I don't have time to go into that one, but I hope to post more on it later as well. So, while we may never be able to simulate the brain, that doesn't mean the brain is not computable; it's just intractable. However, intractability doesn't imply that we couldn't build an artificial brain.

I think I'll save my comments on Montague's other points for a future post.

Wednesday, April 09, 2008

Transients in dynamical systems

I've just posted a paper to arXiv.org entitled "Competition between transients in the rate of approach to a fixed point", by Judy Day, Jonathan Rubin and Carson C. Chow. The paper examines how long it takes to approach a fixed point. The problem was motivated by a biological phenomenon known as tolerance, in which the body's inflammatory response to a noxious stimulus is attenuated by a pre-exposure to that substance. We translated this problem into the question of when, given two orbits, one orbit will "pass" the other. We show that, using general properties of the continuity of orbits and the concept of inhibition, a set of conditions can be established for when tolerance can and cannot exist. Transient dynamics have not been well studied, and this paper represents an approach to the issue.
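
To give a flavour of the question (this is an illustrative toy, not the system analyzed in the paper), here is a minimal sketch of two orbits of a planar linear system approaching the origin, checked for whether one ever overtakes the other in distance to the fixed point:

```python
import numpy as np

# Illustrative system only: a stable linear node dx/dt = A x with two
# different decay rates, so orbits can approach the origin at different
# effective speeds depending on where they start.
A = np.array([[-1.0, 0.0],
              [0.0, -3.0]])

def orbit(x0, dt=0.01, t_end=5.0):
    xs = [np.array(x0, dtype=float)]
    for _ in range(int(t_end / dt)):
        xs.append(xs[-1] + dt * A @ xs[-1])   # forward Euler step
    return np.array(xs)

a = orbit([0.5, 2.0])   # starts farther out, but its large component decays fast
b = orbit([1.0, 0.0])   # starts closer, but decays only at the slow rate

dist_a = np.linalg.norm(a, axis=1)
dist_b = np.linalg.norm(b, axis=1)

# Does the ordering of distances to the fixed point ever switch?
sign = np.sign(dist_a - dist_b)
print("one orbit passes the other:", bool(np.any(sign[1:] != sign[:-1])))
```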

Tuesday, April 01, 2008

Multiple scale analysis

I recently wrote a scholarpedia entry on multiple scale analysis. It is a useful tool of applied mathematics.

Just Published

Our paper "The Dynamics of Human Body Weight Change" just appeared in PLoS Computational Biology.

Friday, February 22, 2008

The dynamics of human body weight change

I've just posted a new paper to the quantitative biology archive: Carson C. Chow and Kevin D. Hall, The dynamics of human body weight change (arXiv:0802.3234). Understanding the dynamics of weight change has important consequences for conditions such as obesity, cancer, AIDS, anorexia and bulimia nervosa. While we know that changes of body weight result from imbalances between the energy derived from food and the energy expended to maintain life and perform physical work, quantifying this relationship has proved difficult. Part of the difficulty stems from the fact that the body is composed of multiple components, and we must quantify how weight change is reflected in alterations of body composition (i.e. fat versus lean mass). In this paper, we show that a model of the flux balances of macronutrients, namely fat, protein and carbohydrates, can provide a general description of the way body weight changes over time. Under general conditions, the model can be reduced to a two dimensional system of fat and lean body masses, which can then be analyzed in the phase plane. For a fixed food intake rate and physical activity level, the body weight and body composition approach a steady state. However, the steady state can correspond to a unique body weight (fixed point) or a continuum of body weights (invariant manifold), depending on how fat oxidation varies with body weight and composition. Interestingly, the existing experimental data on human body weight dynamics cannot presently distinguish between these two possibilities. However, the distinction is important for the efficacy of clinical interventions that alter body composition and mass.
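
For readers who want a feel for what such a reduced model looks like, here is a minimal sketch of a generic two-compartment energy-partition model in the spirit described above. The partition function, parameter values and expenditure formula are placeholders chosen for illustration; they are not the equations in the paper.

```python
# Generic energy-partition sketch: an energy imbalance (intake minus
# expenditure) is split between fat mass F and lean mass L according to
# a partition fraction p(F).  All numbers below are illustrative only.
RHO_F, RHO_L = 39.5, 7.6      # approximate energy densities, MJ per kg
C = 10.5                      # Forbes-like partition constant, kg (placeholder)

def simulate(intake_mj_per_day, F0=30.0, L0=50.0, days=1000, dt=1.0):
    F, L = F0, L0
    for _ in range(int(days / dt)):
        p = C / (C + F)                          # fraction of imbalance going to lean mass
        expenditure = 0.09 * L + 0.03 * F + 3.0  # placeholder expenditure model, MJ/day
        imbalance = intake_mj_per_day - expenditure
        F += dt * (1 - p) * imbalance / RHO_F
        L += dt * p * imbalance / RHO_L
    return F, L

# Fixed food intake: fat and lean mass drift toward a steady state.
print(simulate(10.0))
```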

Wednesday, December 05, 2007

Finite size neural networks

A paper by Hedi Soula and myself on the dynamics of a finite-size neural network (Soula and Chow, Neural Comp, 19:3262 (2007)) is out this month. You can also download it from my homepage. As you may infer, I've been preoccupied with finite-size effects lately. In this paper we consider a very simple Markov model of neuronal firing. We presume that the number of neurons that are active in a given time epoch depends only on the number that were active in the previous epoch. This would be valid in describing a network with recurrent excitation, for example. Given this model, we can calculate the equilibrium probability density function for a network of size N directly, and from that all statistical quantities of interest. We also show that the model can describe the dynamics of a network of more biophysically plausible neuron models. The nice thing about our simple model is that we can then compare the exact results to mean field theory, which is the classical way of studying the dynamics of large numbers of coupled neurons. We show that the mean activity is generally well described by mean field theory, except near criticality as expected, but the variance is not. We also show that the network activity can possess a very long correlation time although the firing statistics of a single neuron do not.
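
As a rough illustration of this kind of model (the transition probabilities below are invented for the sketch and are not the ones in the paper), here is a minimal simulation in which the number of active neurons in each epoch is drawn from a binomial whose firing probability depends on the previous epoch's activity through a sigmoidal gain:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 100                      # network size
steps = 20000
w, theta = 2.0, 0.5          # illustrative gain parameters

def gain(fraction_active):
    """Probability that a given neuron fires, given last epoch's activity."""
    return 1.0 / (1.0 + np.exp(-w * (fraction_active - theta)))

k = N // 2                   # number active in the current epoch
activity = np.empty(steps)
for t in range(steps):
    p = gain(k / N)
    k = rng.binomial(N, p)   # all N neurons fire independently with prob p
    activity[t] = k / N

# Mean field would predict a fixed point a* = gain(a*); the finite network
# instead fluctuates around it, with a variance that shrinks as N grows.
print(f"mean activity {activity.mean():.3f}, std {activity.std():.3f}")
```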

Monday, November 26, 2007

Kinetic Theory of Coupled Oscillators

I have recently published two papers applying ideas from the kinetic theory of plasmas and nonequilibrium statistical mechanics to coupled oscillators. The first is Hildebrand, Buice and Chow, PRL 98:054101 and the second is Buice and Chow, PRE 76:031118. The main concern of both papers is understanding the dynamics of large but not infinite networks of oscillators. Generally, coupled oscillators are studied either in the small network limit where explicit calculations can be performed or in the infinite size "mean field" limit where fluctuations can be ignored. However, many networks are in between, i.e. large enough to be complicated but not so large that the effects of individual oscillators are not felt. This is the regime we were interested in and where the ideas of kinetic theory are useful.

In a nutshell, kinetic theory strives to explain the macroscopic phenomena of a many-body system in terms of (the moments of the distribution function governing) the microscopic dynamics of the constituent particles (oscillators). In the coupled oscillator case, we actually have a macroscopic theory, and what we want to understand is how the microscopic dynamics give rise to that theory. For example, in the Kuramoto model of coupled oscillators, there is a phase transition from asynchrony to synchrony if the coupling strength is sufficiently strong. In the infinite oscillator limit, where fluctuations can be ignored, an order parameter measuring the synchrony in the network can be shown to bifurcate from zero at a critical coupling strength. However, for a finite number of oscillators, the order parameter fluctuates and there is no longer a sharp transition from asynchrony to synchrony but rather a crossover. We show in the first paper that a moment expansion analogous to the BBGKY hierarchy can be derived for the coupled oscillator system and, using a Lenard-Balescu-like approximation, the variance of the order parameter can be computed explicitly.
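
To make the finite-size fluctuations of the order parameter concrete, here is a minimal sketch of a finite-N Kuramoto simulation; the parameters are arbitrary, chosen only so the coupling sits above the synchronization transition (roughly K_c ≈ 1.6 for unit-variance Gaussian frequencies):

```python
import numpy as np

rng = np.random.default_rng(2)

N = 100                          # finite number of oscillators
K = 3.0                          # coupling strength, above the transition
omega = rng.normal(0.0, 1.0, N)  # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

dt, steps = 0.05, 4000
r_history = np.empty(steps)
for t in range(steps):
    z = np.mean(np.exp(1j * theta))   # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    # Kuramoto update in mean-field form: dtheta/dt = omega + K r sin(psi - theta)
    theta += dt * (omega + K * r * np.sin(psi - theta))
    r_history[t] = r

# In the infinite-N limit r would settle to a fixed value above the critical
# coupling; at finite N it fluctuates about that value.
burn = steps // 2
print(f"mean r = {r_history[burn:].mean():.3f}, std = {r_history[burn:].std():.3f}")
```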

The second paper shows that the moment hierarchy can be equivalently expressed in terms of a generating functional of the oscillator density (i.e. a density of the density, if you like). Once expressed in this form, diagrammatic methods of field theory can be used to do perturbative expansions. In particular, we perform a one-loop expansion to show that marginal modes of the mean field theory are stabilized by finite-size fluctuations. This problem of marginal modes had been a puzzle in the field for a number of years.

Thursday, March 22, 2007

House of Cards

The US Comptroller General, David Walker, is currently on a "Fiscal Wake-up Tour" to alert the population about the impending US fiscal crisis. I encourage everyone, including non-Americans, to go to one of these events or at least to the Government Accountability Office website at www.gao.gov. In essence, current US fiscal policy is unsustainable. Promised obligations such as Medicare, Medicaid and Social Security, as well as servicing the interest on the debt, will be 40% of GDP by 2040. Currently, revenue, coming mostly from taxes, is less than 20% of GDP. Hence, either revenues must increase or spending must be cut. Whatever choice is made could lead to dire consequences.

Raising taxes is one solution, but that could have adverse effects on the economy. I think the US could sustain some tax increases, especially on the rich, without bad effects, but not enough to cover the deficit spending. Much of the current economy is geared towards providing services and goods to the well-heeled. If disposable income starts to go down, the first purchases to go will be luxury items like yachts, expensive restaurants, ski vacations, and so forth. People in these professions and industries will then lose their jobs, putting more strain on social services. The very rich will be insulated from tax increases, but the upper middle class, from which most of the tax revenue is extracted, would have to cut down on consumption. Additionally, with the real estate bubble of the past five years, many people are stuck with mortgages that they can barely sustain. Any increase in taxes could push them over the edge. So while taxes can be increased, they really can't be increased too much.

The other option is to cut spending. The retirement age can be raised to reduce the Social Security obligation. However, Social Security is actually in pretty good shape compared to Medicare and Medicaid. Something eventually will be done about these two programs. Hence, medical services and reimbursements to health care professionals will both likely be reduced. Basically, the US could end up being a nation where the poor and elderly do not receive first world medical care. This may be one more reason a complete overhaul of the health care system may be necessary.

Thursday, December 28, 2006

Altruism and Tribalism

There has always been a puzzle in evolutionary biology as to how altruism arose. On first flush, it would seem that sacrificing oneself for another would be detrimental to passing on genes that foster altruism. However, Darwin himself thought that altruism could arise if humans were organized into hostile tribes. In The Descent of Man he notes that the tribes that had more "courageous, sympathetic and faithful members who were always ready to...aid and defend each other... would spread and be victorious over other tribes." A recent paper in Science by Samuel Bowles presents a calculation that supports Darwin's hypothesis.

If this hypothesis is correct, then altruism required lethal hostility to flourish and survive. Our capacity for great acts of sacrifice and empathy may go hand in hand with our capacity for brutality and selfishness. It may be why a person can simultaneously be a racist and a humanist. It may also mean that the sectarian violence we are currently witnessing, and have witnessed throughout history, may be as much a part of being human as caring for an ailing neighbor or taking a bullet for a friend. Our propensity for kindness may go hand in hand with our propensity for bigotry and violence. It may be that the more homogeneous we become, the less altruistic we may be. Perhaps there is an important societal role for spectator sports. Cheering for the home team may give us that sense of tribalism and triumph that we need. Maybe, just maybe, hating that cross-town rival makes us kinder in the office and on the roads. What irony that would be.

Wednesday, December 06, 2006

The Hopfield Hypothesis

In 2000, John Hopfield and Carlos Brody put out an interesting challenge to the neuroscience community. They came up with a neural network, constructed out of simple, well known neural elements, that could do a simple speech recognition task. The network was robust to noise and to the speed at which the sentences were spoken. They conducted some numerical experiments on the network and provided the "data" to anyone interested. People were encouraged to submit solutions for how the network worked, and Jeff Hawkins of Palm Pilot fame kicked in a small prize for the best answer. The initial challenge with the mock data and the implementation details were published separately in PNAS. Our computational neuroscience journal club at Pitt worked on the problem for a few weeks. We came pretty close to getting the correct answer but missed one crucial element.

Hopfield presented the model as a challenge to serve as an example that sometimes more data won't help you understand a problem. I've extrapolated this thought into the statement that perhaps we already know all the neurophysiology we need to understand the brain but just haven't put the pieces together in the right way yet. I call this the Hopfield Hypothesis. I think many neuroscientists believe that there are still many unknown physiological mechanisms to be discovered, and so believe that what we need is not more theories but more experiments and data. Even some theorists believe this notion. I personally know one very prominent computational neuroscientist who believes that there may be some mechanism that we have not yet discovered that is essential for understanding the brain.

Currently, I'm a proponent of the Hopfield Hypothesis. That is not to say I don't think there will be mechanisms, and important ones at that, yet to be discovered. I'm sure this is true, but I do think that much of how the brain functions could be understood with what we already know: namely, that the brain is composed of populations of excitatory and inhibitory neurons, with connections that obey synaptic plasticity rules such as long-term potentiation and spike-timing dependent plasticity, and with adaptation mechanisms such as synaptic facilitation, synaptic depression, and spike frequency adaptation that operate on multiple time scales. Thus far, using these mechanisms, we can construct models of working memory, synchronous neural firing, perceptual rivalry, decision making, and so forth. However, we still don't have the big picture. My sense is that neural systems are highly scale dependent, so as we begin to analyze and simulate larger and more complex networks, we will find new unexpected properties and get closer to figuring out the brain.

Thursday, November 16, 2006

Wealth and Taxes

Milton Friedman, the renowned Chicago economist, died yesterday at the age of 94. His monetary and free-market ideas have strongly influenced US and world economic policy for the past quarter century. However, the most recent US elections suggest that there may be a mood shift taking place.

Currently, the US government spends about 25% of the gross domestic product (GDP), obtaining most of it through taxes and a not insignificant portion through borrowing. The current cost of Social Security is about 4.2% of GDP and is projected to rise to 6.3% by 2030. Medicare's annual cost currently represents 2.5% of GDP, but it is rising very rapidly and is projected to pass Social Security expenditures in 20 years and reach 11% by 2080. You can find all of this information and more at www.socialsecurity.gov. Thus, unless the US starts to curtail benefits and spending or increases taxes, it is headed for a budget crisis.

The argument by the free-marketeers is that we need to move to some form of personal savings accounts, so that instead of contributing to the government's Social Security system, you save for your retirement yourself. The program would be modelled after the 401(k) tax-deferred retirement plan, in which companies, instead of providing a defined-benefit pension plan, match contributions to the employee's own 401(k) plan. This transfers the risk from the employer and government onto the individual.

The idea of Milton Friedman and his followers is that the government should reduce both spending and taxes and the result will be higher economic growth and prosperity for all. It is probably true that lowering taxes does help to increase the wealth of those already well off. However, there is a huge dissipative drag on wealth creation and unless you are above some threshold, extra income probably just goes into expenses or pays down some debt.

For most of the population, the largest expense is housing. The value of a house, and of rent, is mostly just market value, so if the market is tight then any increase in income probably just gets manifested in higher real estate prices. There is an argument that one of the consequences of women entering the workforce was that houses became more expensive. While everyone seems to think that real estate is a great investment (and maybe it would be if you bought rental property), you cannot realize any financial gain until you sell, and unless you plan on downsizing or moving someplace where real estate values are lower, you won't see the returns as extra income for wealth generation.

The two other major and rapidly growing expenses for the average person are healthcare and college tuition. There is lots of talk about how to reduce costs, but I think the increase in cost is real. Thirty years ago, there were limited medical tests and treatments. There were no MRIs, PET scans, costly medications (especially for chronic conditions), and so forth. For those who have access to good health care, the increased cost is probably worth it. Likewise, in the past universities needed little more than books, blackboards and chalk. Now there are computers, wireless internet, shuttle buses to drive students home, extra security for dormitories, 24 hour gyms, and so forth.

The commonality between healthcare and education is that they are necessarily collectively run institutions. The choice is whether to run these privately or publicly. If the choice is to go private, then public funding can be reduced and the cost savings can be returned to the taxpayers, who must then pay for these services themselves. However, the likely result is that you only get the service you can afford. A tax cut leading to an increase of 10% or 20% in income makes a huge difference if you have a large income but very little if you don't.

If we decide to fund these and other services publicly, then we'll need to raise taxes. The problem is that there is a ratchet effect. With the real estate bubble of the past five years, half the population has bought houses they can barely afford and the other half has cashed out the increased value of their house in home equity loans and spent it. The result is that even a small increase in taxes could hurt a lot of families. So if there is a tax increase, the only viable way of doing it is to tax only those who can afford it.

Tuesday, October 31, 2006

War on Obesity

There was a humorous and somewhat sad article in the New York Times last Sunday on the stigmatization of the obese. The article points to a recent research article calculating that, because of Americans' increasing girth, a billion extra gallons of gasoline (petrol for you Europeans) are burned each year. That means an extra 3.8 million tons of carbon dioxide. So, yes, obesity is now linked to climate change.

There is a lot of talk these days about the obesity epidemic and what to do about it. Many people still believe it is a lifestyle choice. The molecular biologists in the field believe that it is a genetic problem and can only be solved pharmaceutically. Not surprisingly, those most vocal about the magic pill fix also seem to have the most patents and biotech ventures on the side. While both of these points of view are probably true in some sense, they both kind of miss the point. I think that the main reason people are gaining weight is that, in our current environment, it is the natural thing to do.

We live in a world where food is extremely cheap and plentiful and exercise is optional. The most logical thing to do, it seems, is to gain weight, and plenty of it. The health consequences of this extra fat will likely not affect most people for many years. Although the incidence of insulin resistance and diabetes is increasing, it is still not clear if moderate weight gain is really all that bad. To quote Katherine Flegal of the Centers for Disease Control and Prevention from the Times article: "Yes, obesity is to blame for all the evils of modern life, except somehow, weirdly, it is not killing people enough. In fact that's why there are all these fat people around. They just won't die."

So what should we do about it? After three years in this field, I've come to the conclusion that there really isn't much we can do about it on the individual level. Our metabolic systems are so geared to acquiring calories that I believe any pharmaceutical option will likely not be effective in the long run and/or will have many side effects. From studies our lab has done on food records, it is quite clear that people generally have no idea how much they eat. I doubt people can will themselves to lose weight. I think the only thing that would work is a wholesale change of our society that would increase the cost or reduce the availability of food and motorized transportation. This is definitely not going to happen by choice or design. So barring a great depression or massive crop failure (which could happen), I think we're just going to have to live with all the extra weight.