Saturday, August 16, 2008

Realistic versus abstract neural modeling

There is a very interesting discussion running on the comp-neuro email list. I've only caught the past week, but it seems to be a debate about the merits of "abstract" versus biologically "realistic" models. (A caveat: everything here is my interpretation of the two points of view.) Jim Bower, who is a strong proponent of realistic modeling, argues that abstract models (an example is a network of point neurons) introduce biases that lead us astray. He thinks that only through realistic modeling can we set down all the constraints necessary to discover how the system works. In a side remark, he also said that he thought the most important problem to understand is what information a given neuron transmits to another; the rest is just clean-up. Bower believes that biology is pre-Copernican: abstract modeling is akin to Ptolemy adding epicycles to explain planetary motion, while realistic modeling is closer to the spirit of Kepler and Newton.

I don't want to debate the history and philosophy of science here, but I do want to make some remarks about these two approaches. There are actually several dichotomies at work. One of the things Bower seems to believe is that a simulation of the brain is not the same as the brain. This is in line with John Searle's argument that you have to include all the details to get it right. On this view, there is no description of the brain that is smaller than the brain itself. I'll call this viewpoint Kolmogorov complexity complete (a term I just made up). On the other hand, Bower seems to be a strict reductionist: he does believe that understanding how the parts work will entirely explain the whole, a view that Stuart Kauffman argued vehemently against in his SIAM lecture and his new book Reinventing the Sacred. Finally, in an exchange between Bower and Randy O'Reilly, who is a computational cognitive scientist and connectionist, Bower rails against David Marr and the top-down approach to understanding the brain. Marr gave an abstract theory of how the cerebellum works in the late sixties, and Bower feels that it has been leading the field astray for forty years.

I find this debate interesting and amusing on several fronts. When I was at Pitt, I remember that Bard Ermentrout used to complain about connectionism because he thought it was too abstract and top-down, whereas using Hodgkin-Huxley-like models for spiking neurons with biologically faithful synaptic dynamics was the bottom-up approach. At the same time, I think Bard (and I use Bard to represent the set of mathematical neuroscientists who mostly focus on the dynamics of interacting spiking neurons, a group to which I belong) was skeptical that the finely detailed realistic modeling of single neurons that Bower was attempting would enlighten us about how the brain works at the multi-neuron scale. One man's bottom up is another man's top down!

I am now much more agnostic about modeling approaches. My current view is that there are effective theories at all scales, and that depending on the question being asked there is a level of detail and a class of models more useful for addressing that question. In my current research program, I'm trying to make the effective theory approach more systematic. So if you are interested in how a single synaptic event influences the firing of a Purkinje cell, then you would want to construct a multi-compartment model of that cell that respects its spatial structure. On the other hand, if you are interested in understanding how a million neurons can synchronize, then perhaps you would want to use point neurons.

One of the things that I do believe is that complexity at one scale may make things simpler at higher scales. I'll give two examples. Suppose a neuron wanted to do coincidence detection of its inputs, i.e. collect inputs and fire only if they arrive at the same time. For a spatially extended neuron, inputs arriving at different locations on the dendritic tree can take vastly different amounts of time to reach the soma, where spiking is initiated. Hence simultaneity at the soma is not simultaneity of arrival, and coincidence detection seemed like a hard problem for a neuron to solve. Then it was discovered that dendrites have active ion channels, so that signals are not just passively propagated, which is slow, but actively propagated, which is fast. In addition, the farther a synapse is from the soma, the faster its signal travels, so that no matter where a synaptic event occurs it takes about the same amount of time to reach the soma. The dendritic complexity effectively turns a spatially extended neuron into a point neuron! Thus, if you focused on understanding signal propagation in the dendrites your model would be complicated, but if you only cared about coincidence detection your model could be simple.

Another example is how inputs affect neural firing. For a given amount of injected current, a neuron fires at a given frequency, giving what is known as an F-I curve. In slice preparations, the F-I curve of a neuron is usually some nonlinear function. However, in these preparations not all neuromodulators are present, so some of the slower adaptive currents are not active. When everything is restored, it was found (both theoretically by Bard and experimentally) that the F-I curve actually becomes more linear. Again, complexity at one level makes things simpler at the next level.
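
You can see the flavor of this linearization in a minimal sketch: a leaky integrate-and-fire neuron with a slow spike-triggered adaptation current. All parameter values below are illustrative choices of mine, not taken from any paper; the point is only that when the adaptation current dominates, the steady-state rate is set by the adaptation balance, f ≈ (I − I_threshold)/(b·τ_w), which is linear in I no matter how curved the unadapted F-I curve is.

```python
import numpy as np

def fi_curve(currents, b, tau_m=10.0, tau_w=200.0, v_th=1.0, v_reset=0.0,
             dt=0.05, t_warm=1000.0, t_meas=5000.0):
    """F-I curve of a leaky integrate-and-fire neuron with an optional
    spike-triggered adaptation current w; b = 0 turns adaptation off.
    Times are in ms, rates in spikes per ms."""
    rates = []
    n_warm, n_meas = int(t_warm / dt), int(t_meas / dt)
    for I in currents:
        v, w, spikes = 0.0, 0.0, 0
        for step in range(n_warm + n_meas):
            v += dt * (-v + I - w) / tau_m   # membrane potential
            w += dt * (-w) / tau_w           # adaptation decays slowly
            if v >= v_th:
                v = v_reset
                w += b                       # adaptation jumps at each spike
                if step >= n_warm:           # count only after the transient
                    spikes += 1
        rates.append(spikes / t_meas)
    return np.array(rates)

def linearity_error(x, y):
    """RMS residual of the best straight-line fit, normalized by the range of y."""
    slope, intercept = np.polyfit(x, y, 1)
    return np.sqrt(np.mean((y - (slope * x + intercept)) ** 2)) / np.ptp(y)

I_inj = np.linspace(1.05, 3.0, 10)
plain = fi_curve(I_inj, b=0.0)    # no adaptation: curved near threshold
adapted = fi_curve(I_inj, b=0.5)  # slow adaptation flattens and linearizes
```

In this sketch, `linearity_error(I_inj, adapted)` comes out smaller than `linearity_error(I_inj, plain)`: the adapted curve is lower but straighter.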

Ultimately, this "Bower" versus "Bard" debate can never be settled because the priors (to use a Bayesian term) of the two are so different. Bower believes that the brain is Kolmogorov complexity complete (KCC) and Bard doesn't. In fact, I think Bard believes that the high-level behavior of networks of many neurons may be simpler to understand than that of just a few neurons. That is why Bower is first trying to figure out how a single neuron works, whereas Bard is more interested in explaining a high-level cognitive phenomenon like hallucinations in terms of pattern formation in an integro-differential system of equations (i.e. the Wilson-Cowan equations). I think most neuroscientists believe that there is a description of the brain (or some aspect of the brain) that is smaller than the brain itself. On the other hand, there seems to be a growing movement toward more realistic characterization and modeling of the brain at the genetic and neural-circuit levels (in addition to the single-neuron level), as evidenced by the work at Janelia Farm and the EPFL in Lausanne, which I'll blog about in the future.
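
For readers who haven't seen them, the Wilson-Cowan equations describe the mean activity of coupled excitatory (E) and inhibitory (I) populations; the hallucination work uses a spatial, integro-differential version, but the space-free pair already shows the style of model. Here is a minimal sketch; the coupling constants are illustrative values of my choosing (loosely in the range of the classic parameters), not the model from any particular paper.

```python
import numpy as np

def sigmoid(x, a, theta):
    # Logistic response function used in the Wilson-Cowan equations.
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def wilson_cowan(P=1.25, Q=0.0, t_max=200.0, dt=0.01):
    """Euler integration of the space-free Wilson-Cowan equations for the
    mean activities E and I of excitatory and inhibitory populations.
    Returns the trajectory as an array of (E, I) pairs."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # E->E, I->E, E->I, I->I couplings
    E, I = 0.1, 0.05                          # initial activities
    traj = []
    for _ in range(int(round(t_max / dt))):
        dE = -E + sigmoid(c1 * E - c2 * I + P, a=1.3, theta=4.0)
        dI = -I + sigmoid(c3 * E - c4 * I + Q, a=2.0, theta=3.7)
        E += dt * dE
        I += dt * dI
        traj.append((E, I))
    return np.array(traj)

traj = wilson_cowan()
```

Because the sigmoid keeps the drive between 0 and 1, the activities stay bounded in [0, 1]; in the spatial version, replacing the couplings by distance-dependent kernels is what allows the pattern-forming instabilities used to model hallucinations.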

6 comments:

Anonymous said...

"When everything is restored, it was found (both theoretically by Bard and experimentally) that the F-I curve actually becomes more linear."

Interesting - do you know the citations for this?

Carson C. Chow said...

Hi,

I'm sorry to say that I don't remember. It might be one with Boris Gutkin. Bard showed it to me on his computer in his office one day. I'll ask him and get back to you. Or you can ask him directly.

cheers,
Carson

Carson C. Chow said...

The reference is: Ermentrout B, Linearization of F-I curves by adaptation. Neural Comput. 1998 Oct 1;10(7):1721-9.

Anonymous said...

Excellent. Although I cannot read through the source code, I have walked this path myself, from Hebbian models to others and then back to Hebbian. At one time I was unsure whether to use a complex model or a simplified one. My goal now is to find the key functions that make our brain work. So, good luck with your research.
