Wednesday, December 06, 2006

The Hopfield Hypothesis

In 2000, John Hopfield and Carlos Brody put out an interesting challenge to the neuroscience community. They constructed a neural network out of simple, well-known neural elements that could perform a simple speech recognition task. The network was robust both to noise and to the speed at which the sentences were spoken. They conducted numerical experiments on the network and provided the "data" to anyone interested. People were encouraged to submit solutions explaining how the network worked, and Jeff Hawkins of Palm Pilot fame kicked in a small prize for the best answer. The initial challenge with the mock data and the implementation details were published separately in PNAS. Our computational neuroscience journal club at Pitt worked on the problem for a few weeks. We came pretty close to the correct answer but missed one crucial element.

Hopfield intended the model as a challenge, to serve as an example that sometimes more data won't help you understand a problem. I've extrapolated this thought into the statement that perhaps we already know all the neurophysiology we need to understand the brain but just haven't put the pieces together in the right way yet. I call this the Hopfield Hypothesis. I think many neuroscientists believe that there are still many unknown physiological mechanisms to be discovered, and so what we need are not more theories but more experiments and data. Even some theorists hold this view. I personally know one very prominent computational neuroscientist who believes there may be some as-yet-undiscovered mechanism that is essential for understanding the brain.

Currently, I'm a proponent of the Hopfield Hypothesis. That is not to say I don't think there are mechanisms, and important ones at that, yet to be discovered. I'm sure there are, but I do think that much of how the brain functions could be understood with what we already know: the brain is composed of populations of excitatory and inhibitory neurons whose connections obey synaptic plasticity rules such as long-term potentiation and spike-timing-dependent plasticity, together with adaptation mechanisms such as synaptic facilitation, synaptic depression, and spike frequency adaptation that operate on multiple time scales. Thus far, using these mechanisms we can construct models of working memory, synchronous neural firing, perceptual rivalry, decision making, and so forth. However, we still don't have the big picture. My sense is that neural systems are highly scale dependent, so as we begin to analyze and simulate larger and more complex networks, we will find new, unexpected properties and get closer to figuring out the brain.
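To give a flavor of the kind of "well-known ingredient" the hypothesis leans on, here is a minimal sketch of a pairwise spike-timing-dependent plasticity rule. The amplitudes and time constants below are illustrative placeholders of a typical order of magnitude, not values from any particular study:

```python
import math

# Illustrative pairwise STDP rule: the synapse is strengthened when the
# presynaptic spike precedes the postsynaptic spike, and weakened otherwise.
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation/depression amplitudes (arbitrary)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (illustrative)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_dw(0.0, 5.0) > 0)  # True: pre leads post, weight increases
print(stdp_dw(5.0, 0.0) < 0)  # True: post leads pre, weight decreases
```

The exponential windows mean that only spike pairs within a few tens of milliseconds of each other change the weight appreciably, which is the temporal-coincidence property the post alludes to.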

5 comments:

steve said...

How about long term memory? Is a biological mechanism understood for content-addressable storage which is stable over years?

Carson Chow said...

Hi,

You make an interesting point. I would answer that long-term memory is likely due to some stable connection pattern in cortex. The question then is what mechanism can generate such stability. This is still up for debate, of course. Lisman argues that positive feedback in the CaMKII pathway is enough to maintain synaptic strength. I don't really have an opinion on what the precise mechanism is, but I think we can understand how long-term memory could work without knowing the precise biological mechanism.

steve said...

OK, just wondering if there were any new developments... It's quite amazing to me that you can form a memory in a fraction of a second and then have it persist for years (of course there may be two mechanisms, short- and long-term, and a means of transference, e.g., during sleep or whatever).

The other thing, and you know this as a "more is different" complex systems guy, is that even if you know the basic mechanisms you don't necessarily "understand" how the whole thing works. If I give you a system of N neurons and you understand the behavior of each individually, that doesn't mean you can characterize the whole brain.

In fact, it may be impossible: in cases of irreducible complexity there is no way to predict the behavior of a program without running (or simulating) the program itself! In that case the best you can hope for is some heuristics that work most of the time, but not always.

Carson Chow said...

Hi Steve,

We know that if two neurons fire simultaneously, then both the synaptic strength and the number of synapses can increase. The finding that neurites can grow in real time is relatively recent.
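The coincidence rule described here is essentially Hebbian. A toy sketch of that idea, with a made-up weight value and learning rate purely for illustration:

```python
# Toy Hebbian update: "neurons that fire together wire together".
# The learning rate eta and the initial weight are illustrative only.
def hebbian_update(w, pre_active, post_active, eta=0.1):
    """Strengthen the synapse only when pre- and postsynaptic neurons are co-active."""
    if pre_active and post_active:
        return w + eta
    return w

w = 0.5
w = hebbian_update(w, True, True)   # coincident firing -> weight grows
w = hebbian_update(w, True, False)  # no postsynaptic spike -> unchanged
print(round(w, 2))  # 0.6
```

A real model would also need something to keep weights bounded (e.g., depression or normalization); this fragment only captures the coincidence-strengthening half of the story.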


How we do one-shot learning is still a major puzzle. It likely involves the prefrontal cortex, hippocampus, and temporal cortex, but we don't really know how it works.

As to your last point, it works both ways. Microscopic laws do not tell you a priori what the system can do and conversely the emergent behavior does not completely specify the microscopic laws.

I don't think the brain is completely irreducibly complex. I think it can be broken down at least into smaller modules like attention, 3D perception, motor control, and so forth. However, each of these systems may be quite large in itself. Ultimately, I doubt we'll actually understand the brain per se, but we will understand various mechanisms and be able to construct or grow brainlike objects.

Achler said...

This is a great topic.
I believe we are not just missing key elements, but the ones we believe in may be wrong as well.
Moreover, what we believe may be interfering with our ability to explore new mechanisms.
The over-reliance on “learning” models appears to be one of these cases.
“Learning models” introduce a huge parameter space: each neuron has thousands of variables, each of which can vary independently. For a small enough test space, of course, anything can be learned and fitted.
However, to derive tractable “learning rules”, a primarily feedforward structure must be assumed (otherwise the mathematics becomes intractable). Given the amount of top-down feedback found in the brain, this is an unrealistic assumption.
Lastly, top-down effects can alter response properties and thereby appear as “learning”.
So why do we automatically assume that changes in response properties are learning?