Sunday, August 31, 2008
I think that philosophy is sometimes important, and this may be true for neuroscience right now. I don't mean ivory tower, "what is life?" type philosophy (although that is important too), but trying to pin down what it would mean to say "we understand the brain." When would we know that the game is won? I think settling this is important for neuroscience now, to help guide research. What should we be doing?
So what does it mean to understand something? I would say there are two aspects. One is predictive power, which for the brain would mean being able to know which drugs or therapies would cure a brain disorder. The second aspect is more difficult to pin down but basically means incorporating something seamlessly into your worldview. The simplest example I can give is a mathematical theorem. Predictive understanding would correspond to the ability to follow all the steps of the proof and to use the theorem to prove new theorems. Incorporative understanding would be the ability to summarize the proof in a way that relates it, in a highly compressed form, to things you already know. For example, we can understand bifurcations of complicated dynamical systems by reducing them to the behavior of solutions of simple polynomial equations.
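To give one concrete instance of that kind of reduction (the saddle-node normal form is my own choice of illustration, not part of the original argument): near a fold of equilibria, essentially any smooth dynamical system, no matter how high-dimensional, reduces locally to a single polynomial equation.

```latex
% Saddle-node (fold) normal form: the local reduction of a generic system
% \dot{\mathbf{y}} = \mathbf{F}(\mathbf{y},\mu) near a fold of equilibria.
\[
  \dot{x} = r + x^{2}
\]
% For r < 0 there are two equilibria, x^{*} = \pm\sqrt{-r}; they collide
% and disappear at r = 0, which captures the entire local bifurcation.
```

Understanding the fold in this one-dimensional caricature is what lets you say you understand it in the full system.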
Sometimes the two views can clash. Consider the proof of the Kepler Conjecture for sphere packing by my friend and former colleague Tom Hales. The theorem is difficult because there are an infinite number of ways to pack spheres in three dimensions. Hales made this manageable by showing that the problem could be reduced to solving a finite (albeit large) optimization problem. He then proceeded to solve the finite problem computationally. To some people, the proof is a done deal: the trick was to reduce it to a finite problem, and after that it is just details. Even if you don't believe Hales's computation, you could always repeat it. Others would say it is not done until you have a complete pencil-and-paper proof. To me, the proof is understandable because Hales was able to reduce it to an algorithm. However, this is not a view that everyone shares.
Now we come back to the brain. What would you consider understanding to entail? I'm not sure that we, namely people working in the field today, will ever have that satisfying incorporative understanding of the brain, because we don't have anything in our current worldview that could encapsulate it. We will never be able to say, "Oh right, I understand, the brain is like X." In that sense, it is like quantum mechanics (QM). QM is a theory that is highly successful in the predictive sense. As a predictive theory, it is quite simple: there are just a few rules to apply, and much of our modern technology, like lasers and electronics, relies on it. However, no one who has ever thought about it would claim any understanding of QM in the incorporative sense. The Copenhagen interpretation is basically a "Don't ask, don't tell" policy for the theory.
In this sense, trying to understand any complex system, no matter how unrelated it is to the brain, could help in the long run to provide a foundation for an incorporative understanding of the brain. That is not to say that I believe there are laws of complex systems similar to those of classical and quantum mechanics. My own view is that there are no laws in complex systems such as the global climate, economics or the brain; there are just effective theories that sort of work in limited circumstances. However, it is by slowly creating effective theories and models that we will form a new worldview of what it means to understand complex systems like the brain. In the meantime, we should continue to build a predictive understanding so that we can cure diseases and treat disorders.
Thursday, August 21, 2008
Materialism and meaning
Let me first say that I am a die-hard materialist in that I believe there is nothing beyond the physical world. I also believe that physics, and hence the physical world, is computable in that it can be simulated on a computer. However, helped along by Stuart Kauffman's new book Reinventing the Sacred, I have been gradually edging towards accepting that even in a purely materialistic world there is some amount of arbitrariness in our perception of reality. Kauffman argues that this arbitrariness is not "mathematizable". I will argue here that the question can be formulated mathematically and can be shown to be undecidable or at best intractable. Kauffman's thesis is that we should take advantage of this arbitrariness and make it the foundation of a new concept of the sacred.
The problem arises from how we assign meaning to things in the world. Philosophers like Wittgenstein and Saul Kripke have thought very deeply on this topic and I'll just barely scratch the surface here. The simple question to me is: what do we consider to be real? I look outside my window and I see some people walking. To me the people are certainly real, but is "walking" real as well? What exactly is walking? If you write a simple program that makes dots move around on a computer screen and show it to someone, then depending on what the dots are doing, they will say the dots are walking, running, crawling and so forth. These verbs correspond to relationships between things rather than to things themselves. Are verbs and relationships real then? They are certainly necessary for our lives. It would be hard to communicate with someone if you didn't use any verbs. I think they are necessary for an animal to survive in the world as well. A rabbit needs to classify whether a wolf is running or walking in order to respond appropriately.
Now, once we accept that we can ascribe some reality, or at least utility, to relationships, this leads to an embarrassment of riches. Suppose we live in a world with N objects that you care about. This can be at any level you want. The number of ways to relate objects in a set is the number of subsets you can form out of those objects. This is called the power set and has cardinality (size) 2^N. But it can get bigger than that. We can also build arbitrarily complex arrangements of things by using the objects more than once. For example, even if you only saw a single bird, you could still invent the term flock to describe a collection of birds. Another way of saying this is that given a finite set of things, there are an infinite number of ways to combine them. This then gives us a countable infinity of items. Now you can take the power set of that set and end up with an uncountable number of items, and you can keep on going if you choose. (Cantor's great achievement was to show that the power set of a countable set is uncountable, the power set of an uncountable set is bigger still, and so forth.) However, we can probably only deal with a finite number of items, or at most a countable list (if we are computable). This finite or countable list encapsulates your perception of reality, and if you believe this argument then the probability of obtaining our particular list is basically zero. In fact, given that the set of all possible lists is uncountable, not all lists can even be computed. Our perception of reality could be undecidable. To me this implies an arbitrariness in how we interact with the physical world, which I call our prior. Kauffman calls this the sacred.
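To see how quickly this explosion gets going, here is a minimal sketch (the object names are made up for illustration) that counts the subsets of a small set of things and then the groupings of those groupings:

```python
from itertools import chain, combinations

def power_set(objects):
    """Return all subsets of a finite collection of objects."""
    return list(chain.from_iterable(
        combinations(objects, k) for k in range(len(objects) + 1)))

things = ["person", "wolf", "rabbit", "bird"]   # N = 4 hypothetical objects
subsets = power_set(things)
print(len(subsets))       # 2**4 = 16 ways to group the things themselves
print(2 ** len(subsets))  # 65536 ways to group those groupings, and so on
```

Iterating the construction even a few times swamps anything a finite agent could enumerate, which is the point of the argument above.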
Now you could argue that the laws of the material world will lead us to a natural choice of items on our list. However, if we could rerun the universe with a slightly different initial condition, would the items on the list be invariant? I think arbitrarily small perturbations would lead to different lists. An argument supporting this idea is that even different world cultures have slightly different lists: there are concepts in some languages that are not easily expressible in others. Hence, even if you think the list is imposed by the underlying laws of the physical world, in order to derive the list you would need to do a complete simulation of the universe, making the task intractable.
This also makes me backtrack on my criticism of Montague's assertion that psychology can affect how we do physics. While I still believe that we have the capability to compute anything the universe can throw at us, our interpretation of what we see and do can depend on our priors.
Saturday, August 16, 2008
Realistic versus abstract neural modeling
There is a very interesting discourse running on the comp-neuro email list. I've only caught the past week, but it seems to be a debate over the merits of "abstract" versus biologically "realistic" models. (Let me caveat that everything here is my interpretation of the two points of view.) Jim Bower, who is a strong proponent of realistic modeling, argues that abstract models (an example is a network of point neurons) add biases that lead us astray. He thinks that only through realistic modeling can we set down all the necessary constraints to discover how the system works. In a side remark, he also said that he thought the most important problem to understand is what information a given neuron transmits to another, and that the rest is just cleanup. Bower believes that biology is pre-Copernican, that abstract modeling is akin to Ptolemy adding epicycles to explain planetary motion, and that realistic modeling is closer to the spirit of Kepler and Newton.
I don't want to debate the history and philosophy of science here, but I do want to make some remarks about these two approaches. There are actually several dichotomies at work. One thing Bower seems to believe is that a simulation of the brain is not the same as the brain. This is in line with John Searle's argument that you have to include all the details to get it right. In this point of view, there is no description of the brain that is smaller than the brain. I'll call this viewpoint Kolmogorov Complexity Complete (a term I just made up right now). On the other hand, Bower seems to be a strict reductionist in that he does believe that understanding how the parts work will entirely explain the whole, a view that Stuart Kauffman argued vehemently against in his SIAM lecture and new book Reinventing the Sacred. Finally, in an exchange between Bower and Randy O'Reilly, who is a computational cognitive scientist and connectionist, Bower rails against David Marr and the top-down approach to understanding the brain. Marr gave an abstract theory of how the cerebellum works in the late sixties, and Bower feels that it has been leading the field astray for forty years.
I find this debate interesting and amusing on several fronts. When I was at Pitt, I remember that Bard Ermentrout used to complain about connectionism because he thought it was too abstract and top-down, whereas using Hodgkin-Huxley-like models for spiking neurons with biologically faithful synaptic dynamics was the bottom-up approach. At the same time, I think Bard (and I use Bard to represent the set of mathematical neuroscientists who mostly focus on the dynamics of interacting spiking neurons, a group to which I belong) was skeptical that the finely detailed realistic modeling of single neurons that Bower was attempting would enlighten us on how the brain works at the multi-neuron scale. One man's bottom up is another man's top down!
I am now much more agnostic about modeling approaches. My current view is that there are effective theories at all scales and that, depending on the question being asked, there is a level of detail and a class of models best suited to addressing that question. In my current research program, I'm trying to make the effective theory approach more systematic. So if you are interested in how a single synaptic event will influence the firing of a Purkinje cell, then you would want to construct a multi-compartmental model of that cell that respects the spatial structure. On the other hand, if you are interested in understanding how a million neurons can synchronize, then perhaps you would want to use point neurons.
One of the things that I do believe is that complexity at one scale may make things simpler at higher scales. I'll give two examples. Suppose a neuron wanted to do coincidence detection of its inputs, i.e. collect inputs and fire only if they arrive at the same time. For a spatially extended neuron, inputs arriving at different locations on the dendritic tree could take vastly different amounts of time to reach the soma, where spiking is initiated. Hence simultaneity at the soma is not simultaneity of arrival, and it thus seemed that coincidence detection was a hard problem for a neuron. Then it was discovered that dendrites have active ion channels, so that signals are not just passively propagated, which is slow, but actively propagated, which is fast. In addition, the farther away a synapse is, the faster the signal travels, so that no matter where a synaptic event occurs, it takes about the same amount of time to reach the soma. The dendritic complexity turns a spatially extended neuron into a point neuron! Thus, if you just focused on understanding signal propagation in the dendrites, your model would be complicated, but if you only cared about coincidence detection, your model could be simple.

Another example is in how inputs affect neural firing. For a given amount of injected current, a neuron will fire at a given frequency, giving what is known as an F-I (frequency-current) curve. Usually in slice preparations, the F-I curve of a neuron is some nonlinear function. However, in these preparations not all neuromodulators are present, so some of the slower adaptive currents are not active. When everything is restored, it was found (both theoretically by Bard and experimentally) that the F-I curve actually becomes more linear. Again, complexity at one level makes things simpler at the next level.
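To show what a point-neuron F-I curve looks like, here is a minimal sketch using the closed-form firing rate of a leaky integrate-and-fire neuron. The model and its parameter values are my own stand-in for illustration; they are not the models or data behind the linearization result mentioned above.

```python
import numpy as np

def lif_fi_curve(I, tau_m=0.02, R=1.0, V_th=1.0, t_ref=0.002):
    """Firing rate (Hz) of a leaky integrate-and-fire point neuron driven by
    a constant current I. Below threshold the neuron never fires; above it,
    the standard closed-form interspike interval applies."""
    I = np.asarray(I, dtype=float)
    rate = np.zeros_like(I)
    supra = R * I > V_th                      # suprathreshold inputs only
    isi = t_ref + tau_m * np.log(R * I[supra] / (R * I[supra] - V_th))
    rate[supra] = 1.0 / isi
    return rate

currents = np.linspace(0.0, 3.0, 7)
print(np.round(lif_fi_curve(currents), 1))    # zero below threshold, then a
                                              # sharply rising nonlinear curve
```

Even this caricature shows the strong nonlinearity near threshold; the point of the slice result above is that the slow adaptive currents, when present, iron much of that nonlinearity out.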
Ultimately, this "Bower" versus "Bard" debate can never be settled because the priors (to use a Bayesian term) of the two are so different. Bower believes that the brain is Kolmogorov complexity complete (KCC) and Bard doesn't. In fact, I think that Bard believes that the higher-level behavior of networks of many neurons may be simpler to understand than sets of just a few neurons. That is why Bower is first trying to figure out how a single neuron works, whereas Bard is more interested in explaining a high-level cognitive phenomenon like hallucinations in terms of pattern formation in an integro-differential system of equations (i.e. the Wilson-Cowan equations). I think most neuroscientists believe that there is a description of the brain (or some aspect of the brain) that is smaller than the brain itself. On the other hand, there seems to be a growing movement towards more realistic characterization and modeling of the brain at the genetic and neural circuit levels (in addition to the neuron level), as evidenced by the work at Janelia Farm and EPFL in Lausanne, which I'll blog about in the future.
Friday, August 15, 2008
New paper on insulin's effect on free fatty acids
A paper I've been trying to get published for two years will finally appear in the American Journal of Physiology - Regulatory, Integrative, and Comparative Physiology. The goal of this paper was to develop a quantitative model for how insulin suppresses free fatty acid levels in the blood. A little background for those unfamiliar with human metabolism: all of the body's cells burn fuel, and for most cells this can be fat, carbohydrate or protein. The brain, however, can only burn glucose, which is a carbohydrate, and ketone bodies, which are made when the body is short of glucose. Why the brain can't burn fat is still a mystery; it is not because fat cannot cross the blood-brain barrier, as is sometimes claimed. Thus, the body has a reason to regulate glucose levels in the blood. It does this through hormones, the most well known of which is insulin.
Muscle cells cannot take up glucose unless insulin is present. So when you eat a meal with carbohydrates, insulin is released by the pancreas and your body utilizes the glucose that is present. Between meals, muscle cells mostly burn fat in the form of free fatty acids that are released by fat cells (adipocytes) through a process called lipolysis. The glucose that is circulating is thus saved for the brain. When insulin is released, it also suppresses lipolysis. Basically, insulin flips a switch that causes muscle and other body cells to go from burning fat to burning glucose and, at the same time, shuts off the supply of fat as a fuel.
If your pancreas cannot produce enough insulin, then your glucose levels will be elevated, and this is diabetes mellitus. Fifty years ago, diabetes was usually the result of an autoimmune disorder that destroyed the pancreatic beta cells that produce insulin. This is known as Type I diabetes. However, the most prevalent form of diabetes today, called Type II, arises from a drawn-out process associated with overweight and obesity. In Type II diabetes, people first go through a phase called insulin resistance, where more insulin is required for glucose to be taken up by muscle cells. The theory is that after prolonged insulin resistance, the pancreas eventually wears out and this leads to diabetes. Insulin resistance is usually reversible by losing weight.
Thus, a means to measure how insulin resistant or sensitive you are is important. This is usually done through a glucose challenge test, where glucose is either ingested or injected and the response of the body is measured. I don't want to get into all the methods used to assess insulin sensitivity, but one of them uses what is known as the minimal model of glucose disposal, which was developed in the late seventies by Richard Bergman, Claudio Cobelli and colleagues. This is a system of two ordinary differential equations that model insulin's effect on blood glucose levels. The model is fit to the data, and an insulin sensitivity index is one of the parameters. Dave Polidori, who is a research scientist at Johnson and Johnson, claims that this is the most used mathematical model in all of biology. I don't know if that is true, but it does have great clinical importance.
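For readers who want to see its structure, here is a rough sketch of the standard two-equation minimal model. The parameter values, initial conditions and insulin time course below are purely illustrative assumptions on my part, not values fit to any data.

```python
import numpy as np

def minimal_model(t, y, p1, p2, p3, Gb, Ib, insulin):
    """Bergman-style minimal model of glucose disposal.
    G: plasma glucose; X: insulin action in a remote compartment.
    The insulin sensitivity index is S_I = p3 / p2."""
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return np.array([dG, dX])

def euler(f, y0, t, *args):
    """Fixed-step Euler integrator, good enough for a sketch."""
    ys = [np.array(y0, dtype=float)]
    for t0, t1 in zip(t[:-1], t[1:]):
        ys.append(ys[-1] + (t1 - t0) * f(t0, ys[-1], *args))
    return np.array(ys)

# Illustrative parameters and an assumed decaying insulin excursion
p1, p2, p3, Gb, Ib = 0.03, 0.02, 1e-5, 90.0, 10.0
insulin = lambda t: Ib + 80.0 * np.exp(-t / 30.0)        # above basal, mU/l
t = np.linspace(0.0, 240.0, 2401)                        # minutes
traj = euler(minimal_model, [250.0, 0.0], t, p1, p2, p3, Gb, Ib, insulin)
print(round(traj[-1, 0], 1))   # glucose relaxes back toward the basal level Gb
```

Fitting this kind of model to a glucose challenge time course is what yields the insulin sensitivity index mentioned above.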
The flip side to glucose is the control of free fatty acids (FFAs) in the blood, and this aspect has not been as well quantified. Several groups have been trying to develop an analogous minimal model for insulin's action on FFA levels. However, none of these models had been validated or even tested against each other on a single data set. In this paper, we used a data set of 102 subjects and tested 23 different models that included previously proposed models and several new ones. The models have the form of an augmented minimal model with compartments for insulin, glucose and FFA. Using Bayesian model comparison methods and a Markov chain Monte Carlo algorithm, we calculated the Bayes factors for all the models. We found that a class of models distinguished itself from the rest, with one model performing best. I've been using Bayesian methods quite a bit lately and I'll blog about them sometime in the future. If you're interested in the details of the model, I encourage you to read the paper.
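To give a flavor of what a Bayes factor is, here is a toy sketch that compares two simple models on synthetic data by brute-force prior sampling of the marginal likelihood. This is only an illustration of the concept; it is not the MCMC machinery or the FFA models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a weak linear trend (made up; not the FFA data set)
x = np.linspace(0.0, 1.0, 30)
sigma = 0.3
y = 0.5 + 1.0 * x + rng.normal(0.0, sigma, x.size)

def log_likelihood(y_pred):
    """Gaussian log likelihood of the data given model predictions."""
    return (-0.5 * np.sum((y - y_pred) ** 2) / sigma**2
            - x.size * np.log(sigma * np.sqrt(2.0 * np.pi)))

def log_evidence(predict, dim, n_samples=20000):
    """Crude Monte Carlo estimate of the marginal likelihood: average the
    likelihood over draws from a broad N(0, 2) prior on the parameters.
    (In practice one would use MCMC or something more sophisticated.)"""
    logs = np.array([log_likelihood(predict(rng.normal(0.0, 2.0, dim)))
                     for _ in range(n_samples)])
    m = logs.max()
    return m + np.log(np.mean(np.exp(logs - m)))

log_ev_const = log_evidence(lambda th: np.full_like(x, th[0]), dim=1)
log_ev_line = log_evidence(lambda th: th[0] + th[1] * x, dim=2)
print("log Bayes factor (linear vs constant):", log_ev_line - log_ev_const)
```

The Bayes factor automatically penalizes the extra parameters of a more complicated model, which is why it is a natural way to rank 23 competing models of differing complexity.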
Saturday, August 09, 2008
SIAM Lifesciences '08
I've just returned from the Society for Industrial and Applied Mathematics (SIAM) Life Sciences meeting in Montreal. I haven't traveled to a meeting since my baby was born, so it was nice to catch up with old friends and the field. I thought that all of the plenary talks were excellent, and I commend the organizing committee for doing a great job. Particularly interesting was a public lecture given by Stuart Kauffman on his new book "Reinventing the Sacred". That talk was full of ideas that I've been directly interested in, and I'll blog about them soon.
One of the things I took away from this meeting was that the field seems more diverse than when I organized it in 2004. In particular, I thought that the first two editions of this meeting (2002, 2004) were more like an offshoot of the very well attended SIAM dynamical systems meeting held in Snowbird, Utah in odd-numbered years. Now the participation base is more diverse, and in particular there is much more overlap with the systems biology community. One of the unique things about this meeting is that people interested in systems neuroscience and systems biology both attend. These two communities generally don't mix, even though some of their problems and methods are similar and would benefit from the interaction. Erik De Schutter wrote a nice article recently in PLoS Computational Biology exploring this topic. I thus particularly enjoyed the fact that there were sessions that included talks on both neural and genetic/biochemical networks. In addition, there were sessions on cardiac dynamics, metabolism, tissue growth, imaging, fluid dynamics, epidemiology and many other areas. Hence, I think this meeting plays a useful and unique role in bringing together mathematicians and modelers from all fields.
I gave a talk on my work on the dynamics of human body weight change. In addition to summarizing my PLoS Computational Biology paper, I also showed that because humans have such a long time constant for achieving energy balance on a fixed rate of energy intake (a year or more), we can tolerate wide fluctuations in our energy intake rate and still have a small variance in our body weight. This addresses the "paradox" that nutritionists seem to believe in, namely: if a change as small as 20 kcal/day (a cookie is ~150 kcal) can lead to a weight change of a kilogram, how do we maintain our body weights when we consume over a million kcal a year? Part of the confusion stems from conflating the average with the standard deviation. Given that we only eat finite amounts of food per day, then no matter what you eat in a year you will have some average body weight. The question is why the standard deviation is so small; we generally don't fluctuate by more than a few kilos per year. The answer is simply that with a long time constant, we average over the fluctuations. My back-of-the-envelope calculation shows that the coefficient of variation (standard deviation divided by mean) of body weight is suppressed by a factor of 15 or more relative to the coefficient of variation of food intake. This also points to correlations in food intake rate as a route to weight gain, as was addressed in my paper with Vipul Periwal (Periwal and Chow, AJP-EM, 291:929 (2006)).
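Here is a toy version of that back-of-the-envelope argument: a linearized energy-balance model in which deviations of weight from equilibrium relax with a time constant of order a year while being kicked by independent daily fluctuations in intake. The time constant, intake statistics and energy density below are rough illustrative guesses of mine, not the numbers from the paper or the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

rho = 7700.0          # kcal stored per kg of body weight change (rough value)
tau = 400.0           # time constant for reaching energy balance, in days
mean_intake = 2500.0  # kcal/day
sd_intake = 500.0     # large day-to-day fluctuations: CV of intake = 20%
days = 20 * 365

W_eq = 70.0           # kg, arbitrary equilibrium weight
w = 0.0               # deviation from equilibrium (kg)
weights = np.empty(days)
for day in range(days):
    eps = rng.normal(0.0, sd_intake)   # today's intake fluctuation (kcal)
    w += eps / rho - w / tau           # store the excess, relax toward zero
    weights[day] = W_eq + w

cv_intake = sd_intake / mean_intake
cv_weight = weights.std() / weights.mean()
print("CV of intake:", round(cv_intake, 3))
print("CV of weight:", round(cv_weight, 4))
print("suppression factor:", round(cv_intake / cv_weight, 1))
```

With these made-up numbers the weight fluctuations come out around a kilogram and the suppression factor lands in the neighborhood of the factor of 15 quoted above, which is the qualitative point: a slow system averages away fast fluctuations in its input.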
Karen Ong, from my group, also attended and presented our work on steroid-mediated gene expression. She won the poster competition for undergraduate students. I'll blog about this work in the future. While I was sitting in the sessions on computational neuroscience and gene regulation, I regretted not having more people in my lab attend and present our current ideas on these and other topics.
Saturday, August 02, 2008
Penrose redux
In 2006, I posted my thoughts on Roger Penrose's argument that human thought must be noncomputable. Penrose's argument starts from Gödel's incompleteness theorem, which states that in a consistent formal system rich enough to express arithmetic with the integers, there exist true statements that cannot be proved within that system. The proof basically boils down to showing that statements like "this statement cannot be proved" are true but cannot be proved, because if they could be proved then the system would be inconsistent. Turing later showed that this is equivalent to saying that there are problems, known as undecidable or uncomputable problems, that a computer cannot solve. From these theorems, Penrose draws the conclusion that since we can recognize that unprovable statements are true, we must not be computers.
My original argument refuting Penrose's claim was that we don't really know what formal system we are using, or whether it remains fixed, so we can't know whether we are recognizing true statements that we can't prove. However, I now have a simpler argument, which is simply that no human has ever solved an uncomputable problem, and hence no human has shown they are more than a computer. The fact that we know about uncomputability is not an example. A machine could have the same knowledge, since Gödel's and Turing's proofs, like all proofs, are computable. Another way of saying this is that any proof, or anything else that can be written down in a finite number of symbols, could also be produced by a computer.
An example is the fact that you can infer the existence of the real numbers using only the integers. Thus, even though the real numbers are uncountable, and hence mostly uncomputable, we can prove lots of properties about them using only integers. Dedekind cuts can be used to prove the completeness of the real numbers without resorting to the axiom of choice. Humans and computers can reason about real numbers, and about physical theories based on real numbers, without ever having to deal directly with real numbers. To paraphrase: reasoning about uncomputable problems is not the same as solving uncomputable problems. So until a human being can reliably tell me whether an arbitrary Diophantine equation (a polynomial equation with integer coefficients) has a solution in integers (Hilbert's tenth problem), or can always tell whether an arbitrary program will halt (the halting problem), I'll continue to believe that a computer can do whatever we can do.
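To make the distinction concrete, here is a small sketch of why the Diophantine question is only semidecidable: a brute-force search will find an integer solution whenever one exists, but it can never certify that no solution exists, because in that case it simply runs forever. (The equations in the comments are my own toy examples.)

```python
from itertools import count, product

def diophantine_search(p, n_vars):
    """Search for an integer solution of p(x1, ..., xn) == 0 by trying
    tuples of ever-larger magnitude. If a solution exists this halts;
    if none exists it loops forever, which is the sense in which
    Hilbert's tenth problem is only semidecidable."""
    for bound in count(1):
        candidates = range(-bound, bound + 1)
        for xs in product(candidates, repeat=n_vars):
            if p(*xs) == 0:
                return xs

# x^2 + y^2 - 25 = 0 has integer solutions, so the search halts:
print(diophantine_search(lambda x, y: x**2 + y**2 - 25, 2))

# x^2 - 2 = 0 has no integer solution, so
# diophantine_search(lambda x: x**2 - 2, 1) would run forever;
# by Matiyasevich's theorem, no general procedure can decide this in advance.
```

Recognizing that the search has to be run is something both a human and a computer can do; actually deciding every such question is something neither has ever done.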