Saturday, August 16, 2008

Realistic versus abstract neural modeling

There is a very interesting discussion running on the comp-neuro email list. I've only caught the past week, but it seems to be a debate over the merits of "abstract" versus biologically "realistic" models. (A caveat: everything here is my interpretation of the two points of view.) Jim Bower, who is a strong proponent of realistic modeling, argues that abstract models (an example is a network of point neurons) add biases that lead us astray. He thinks that only through realistic modeling can we set down all the necessary constraints to discover how the system works. In a side remark, he also said that he thought the most important problem to understand is what information a given neuron transmits to another, and that the rest is just cleanup. Bower believes that biology is pre-Copernican: abstract modeling is akin to Ptolemy adding epicycles to explain planetary motion, while realistic modeling is closer to the spirit of Kepler and Newton.

I don't want to debate the history and philosophy of science here, but I do want to make some remarks about these two approaches. There are actually several dichotomies at work. One thing Bower seems to believe is that a simulation of the brain is not the same as the brain. This is in line with John Searle's argument that you have to include all the details to get it right. In this view, there is no description of the brain that is smaller than the brain. I'll call this viewpoint Kolmogorov Complexity Complete (a term I just made up). On the other hand, Bower seems to be a strict reductionist in that he does believe that understanding how the parts work will entirely explain the whole, a view that Stuart Kauffman argued vehemently against in his SIAM lecture and in his new book Reinventing the Sacred. Finally, in an exchange with Randy O'Reilly, who is a computational cognitive scientist and connectionist, Bower rails against David Marr and the top-down approach to understanding the brain. Marr gave an abstract theory of how the cerebellum worked in the late sixties, and Bower feels that this has been leading the field astray for forty years.

I find this debate interesting and amusing on several fronts. When I was at Pitt, I remember that Bard Ermentrout used to complain about connectionism because he thought it was too abstract and top down, whereas using Hodgkin-Huxley-like models for spiking neurons with biologically faithful synaptic dynamics was the bottom-up approach. At the same time, I think Bard (and I use Bard to represent the set of mathematical neuroscientists who mostly focus on the dynamics of interacting spiking neurons, a group to which I belong) was skeptical that the finely detailed realistic modeling of single neurons that Bower was attempting would enlighten us on how the brain works at the multi-neuron scale. One man's bottom up is another man's top down!

I am now much more agnostic about modeling approaches. My current view is that there are effective theories at all scales and that, depending on the question being asked, there is a level of detail and a class of models better suited to addressing it. In my current research program, I'm trying to make this effective-theory approach more systematic. So if you are interested in how a single synaptic event influences the firing of a Purkinje cell, then you would want to construct a multi-compartmental model of that cell that respects its spatial structure. On the other hand, if you are interested in understanding how a million neurons can synchronize, then perhaps you would want to use point neurons.
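
To make the point-neuron end of that spectrum concrete, here is a minimal sketch (in Python; all parameter values are illustrative assumptions, not fit to any cell) of a leaky integrate-and-fire neuron, the kind of abstraction one might choose when only spike timing, and not dendritic geometry, matters for the question at hand.

```python
def lif_spike_times(I, T=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0):
    """Spike times of a leaky integrate-and-fire point neuron driven by a
    constant current I (all parameters are illustrative):
        tau dv/dt = -v + I, with a spike and a reset to v_reset when v >= v_th.
    """
    v, spikes = v_reset, []
    for step in range(int(T / dt)):
        v += dt * (-v + I) / tau
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Suprathreshold drive gives regular firing at a rate set by tau, I, and v_th
print(len(lif_spike_times(I=1.5)), "spikes in one second for I = 1.5")
```

A multi-compartmental Purkinje cell model of the sort Bower builds would instead track the voltage in hundreds of coupled compartments, each with its own channel kinetics.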

One thing I do believe is that complexity at one scale can make things simpler at higher scales. I'll give two examples (with toy sketches of each below).

First, suppose a neuron wanted to do coincidence detection on its inputs, i.e. collect inputs and fire only if they arrive at the same time. For a spatially extended neuron, inputs arriving at different locations on the dendritic tree can take vastly different amounts of time to reach the soma, where spiking is initiated. Simultaneity at the soma is therefore not simultaneity of arrival, so coincidence detection seemed like a hard problem for a neuron to solve. Then it was discovered that dendrites have active ion channels, so signals are not just passively propagated, which is slow, but actively propagated, which is fast. Moreover, the farther away a synapse is, the faster the signal travels, so that no matter where a synaptic event occurs it takes about the same amount of time to reach the soma. Dendritic complexity turns a spatially extended neuron into a point neuron! Thus, if you focused on understanding signal propagation in the dendrites, your model would have to be complicated, but if you only cared about coincidence detection, your model could be simple.

The second example is how inputs affect neural firing. For a given amount of injected current a neuron fires at a given frequency, giving what is known as an F-I curve. In slice preparations, the F-I curve of a neuron is usually some nonlinear function. However, in these preparations not all neuromodulators are present, so some of the slower adaptive currents are not active. When everything is restored, it was found (both theoretically by Bard and experimentally) that the F-I curve actually becomes more linear. Again, complexity at one level makes things simpler at the next level.
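
Here is a caricature of the first example (a back-of-the-envelope sketch, not a cable-equation model; the propagation speeds are made-up numbers): if the effective conduction speed grows with distance from the soma, somatic arrival times become nearly independent of where on the tree a synapse sits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synapses activated simultaneously at random distances from the soma (microns)
distances = rng.uniform(50.0, 400.0, size=8)

# Passive-cable caricature: a single slow effective speed, so somatic arrival
# time grows with distance and "simultaneous" inputs get smeared out in time.
v_passive = 0.2                                # mm/ms, illustrative
t_passive = (distances / 1000.0) / v_passive   # ms

# Active-dendrite caricature: effective speed proportional to distance, so
# every input takes roughly the same time to reach the soma.
v_active = 0.005 * distances                   # mm/ms, grows with distance (assumption)
t_active = (distances / 1000.0) / v_active     # ms

print("spread of somatic arrival times for simultaneous inputs (ms):")
print(f"  passive dendrite: {t_passive.max() - t_passive.min():.2f}")
print(f"  active dendrite:  {t_active.max() - t_active.min():.2f}")
```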
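
And a sketch of the second example. This is not the model analyzed in the Ermentrout paper cited in the comments below; it is just a quadratic (type I) integrate-and-fire neuron with a spike-triggered adaptation current, with parameters invented for illustration. It shows the qualitative effect, though: without adaptation the F-I curve is strongly concave (roughly a square root), and with a slow adaptation current it straightens out.

```python
def firing_rate(I, g_adapt=0.0, t_settle=2.0, t_measure=5.0, dt=1e-4,
                tau=0.01, tau_a=0.5, v_peak=10.0, a_jump=1.0):
    """Steady-state rate of a quadratic integrate-and-fire neuron with an
    optional spike-triggered adaptation current a (illustrative parameters):
        tau   dv/dt = v**2 + I - g_adapt * a,   spike and reset at v_peak
        tau_a da/dt = -a,                       a -> a + a_jump at each spike
    Spikes are counted only after an initial settling period.
    """
    v, a, spikes = -v_peak, 0.0, 0
    n_settle = int(t_settle / dt)
    for step in range(int((t_settle + t_measure) / dt)):
        v += dt * (v * v + I - g_adapt * a) / tau
        a -= dt * a / tau_a
        if v >= v_peak:
            if step >= n_settle:
                spikes += 1
            v = -v_peak
            a += a_jump
    return spikes / t_measure

for I in (1.0, 2.0, 4.0, 8.0, 16.0):
    f0 = firing_rate(I)                # no adaptation: concave, sqrt-like F-I curve
    f1 = firing_rate(I, g_adapt=2.0)   # slow adaptation: nearly linear in I
    print(f"I = {I:5.1f}   {f0:6.1f} Hz (no adaptation)   {f1:6.1f} Hz (with adaptation)")
```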

Ultimately, this "Bower" versus "Bard" debate can never be settled because the priors (to use a Bayesian term) of the two sides are so different. Bower believes that the brain is Kolmogorov complexity complete (KCC) and Bard doesn't. In fact, I think Bard believes that the high-level behavior of networks of many neurons may be simpler to understand than that of sets of just a few neurons. That is why Bower is first trying to figure out how a single neuron works, whereas Bard is more interested in explaining a high-level cognitive phenomenon like hallucinations in terms of pattern formation in an integro-differential system of equations (i.e. the Wilson-Cowan equations). I think most neuroscientists believe that there is a description of the brain (or some aspect of the brain) that is smaller than the brain itself. On the other hand, there seems to be a growing movement towards more realistic characterization and modeling of the brain at the genetic and neural-circuit levels (in addition to the single-neuron level), as evidenced by the work at Janelia Farm and EPFL in Lausanne, which I'll blog about in the future.
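
Since the Wilson-Cowan story came up, here is a very stripped-down sketch of what "pattern formation in an integro-differential equation" means. This is a single-population, one-dimensional neural field (an Amari-style caricature, not the full two-population Wilson-Cowan system and certainly not the Ermentrout-Cowan hallucination model, which lives on a two-dimensional cortical sheet). The kernel, gain, and threshold are made-up values chosen so that the uniform state is Turing unstable and a tiny perturbation grows into a periodic pattern of activity bumps.

```python
import numpy as np

# One-dimensional neural field on a periodic domain:
#   tau du/dt = -u + (w * f(u)),   where * is spatial convolution.
# Local excitation / lateral inhibition ("Mexican hat") coupling; external
# input is set to zero so the uniform state sits at the sigmoid's threshold.
# All parameter values are illustrative assumptions.
L, N = 40.0, 200
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N

def f(u, beta=20.0, theta=0.05):
    """Sigmoidal firing-rate function."""
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))

def kernel(d, A_e=1.0, sig_e=1.0, A_i=0.9, sig_i=2.0):
    """Difference of Gaussians: narrow excitation minus broad inhibition."""
    exc = A_e / (sig_e * np.sqrt(np.pi)) * np.exp(-(d / sig_e) ** 2)
    inh = A_i / (sig_i * np.sqrt(np.pi)) * np.exp(-(d / sig_i) ** 2)
    return exc - inh

# Coupling matrix built from the periodic distance between grid points
d = np.abs(x[:, None] - x[None, :])
W = kernel(np.minimum(d, L - d)) * dx

tau, dt, steps = 1.0, 0.05, 4000
rng = np.random.default_rng(1)
u = 0.05 + 1e-3 * rng.standard_normal(N)   # uniform state plus a tiny perturbation

for _ in range(steps):
    u += dt * (-u + W @ f(u)) / tau

bumps = np.sum((u > np.roll(u, 1)) & (u > np.roll(u, -1)) & (u > u.mean()))
print(f"activity range after t = {steps * dt:.0f}: [{u.min():.3f}, {u.max():.3f}], "
      f"{bumps} bumps grew out of a nearly uniform state")
```

The spontaneous stripes of activity that emerge here are a one-dimensional analogue of the cortical patterns that, mapped back through the retinocortical transformation, Ermentrout and Cowan proposed as the substrate of geometric visual hallucinations.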

4 comments:

Anonymous said...

"When everything is restored, it was found (both theoretically by Bard and experimentally) that the F-I curve actually becomes more linear."

Interesting - do you know the citations for this?

Carson C. Chow said...

Hi,

I'm sorry to say that I don't remember. It might be one with Boris Gutkin. Bard showed it to me on his computer in his office one day. I'll ask him and get back to you. Or you can ask him directly.

cheers,
Carson

Carson C. Chow said...

The reference is: Ermentrout B, Linearization of F-I curves by adaptation. Neural Comput. 1998 Oct 1;10(7):1721-9.

Anonymous said...

Excellent. Although I cannot read through the source code, I have walked along the path myself, from Hebbian models to others and then back to Hebbian. At one time, I was confused about whether I should use a complex model or a simplified model. My goal now is to find the key functions that make our brain work. So, good luck to you in your studies.