Sunday, July 27, 2008

In a recent post, I commented on Read Montague's proposal that it may be necessary to account for the limits of psychology when developing theories of physics. I disagreed with his thesis because if we accept that physics is computable and the brain is described by physics, then any property of physics should be discernible by the brain. However, I should have been more careful in my statements. Given the theorems of Gödel and Turing, we must also accept that there may be certain problems or questions that are not decidable or computable. The most famous example is that there is no algorithm to decide whether an arbitrary computation will ever stop. In computer science this is known as the halting problem. (Gödel's incompleteness theorems are direct consequences of the halting problem, although historically they came first.) The implication is quite broad: it also means that there is no surefire way of knowing whether a given computation will do a particular thing (e.g. print out a particular symbol). This is also why there is no certain way of ever knowing if a person is insane or if a criminal will commit another crime, as I claimed in my post on crime and neuroscience.
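To make this concrete, here is a minimal sketch of Turing's diagonal argument in Python (my illustration, not anything from the original argument's notation). The function halts is a hypothetical oracle, and the contradiction below is exactly why no such oracle can exist:

```python
# Sketch of Turing's diagonal argument. "halts" is a hypothetical
# oracle that decides whether program(arg) ever stops; "contrary"
# shows why no such oracle can be implemented.
def halts(program, arg):
    """Pretend this returns True iff program(arg) eventually halts."""
    raise NotImplementedError("no algorithm can do this in general")

def contrary(program):
    # Do the opposite of whatever the oracle predicts for a program
    # fed its own code.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to run forever, so halt at once

# Consider contrary(contrary): if halts says it halts, it loops;
# if halts says it loops, it halts. Either answer is wrong, so
# halts cannot exist.
```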
Hence, it may be possible that some theories in physics, like the ultimate super-duper theory of everything, are in fact undecidable. However, this is not just a problem of psychology but also a problem of physics. Some, like the British mathematical physicist Roger Penrose, would argue that the brain and physics are actually not computable. Penrose's arguments are outlined in two books, The Emperor's New Mind and Shadows of the Mind. It could be that the brain is not computable, but I (and many others, for various reasons) don't buy Penrose's argument for why it is not. I posted on this topic previously, although I've refined my ideas considerably since that post.
However, even if the brain and physics were not computable, there would still be a problem, because we can only record and communicate ideas with a finite number of symbols, and that is limited by the theorems of Turing and Gödel. A single person might be able to solve a problem or understand something that is undecidable, but she would not be able to tell anyone else what it is or write about it. The best she could do is teach someone else how to get into such a mental state and "see" it for themselves. One could argue that this is what religion and spiritual traditions are for. Buddhism, in a crude sense, is a recipe for attaining nirvana (although one of its precepts is that trying to attain nirvana is a surefire way of not attaining it!). So it could be that throughout history there have been people who attained a level of, dare I say, enlightenment, but there was no way for them to tell us what that means exactly.
Friday, July 18, 2008
Neural Code
An open question in neuroscience is: what is the neural code? By that I mean: how is information represented and processed in the brain? I would say that the majority of neuroscientists, especially experimentalists, don't worry too much about this problem and implicitly assume what is called a rate code, which I will describe below. There is, however, a small but active group of experimentalists and theorists who are keenly interested in this question, and there is a yearly conference, usually at a ski resort, devoted to the topic. I would venture to say that within this group - who use tools from statistics, Bayesian analysis, machine learning and information theory to analyze data obtained from in vivo multi-electrode recordings of neural activity in awake or sedated animals given various stimuli - there is more skepticism towards a basic rate code than in the general neuroscience community.
For the benefit of the uninitiated, I will first give a very brief and elementary review of neural signaling. The brain consists of 10^11 or so neurons, which are intricately connected to one another. Each neuron has a body, called the soma, an output cable, called the axon, and input cables, called the dendrites. Axons "connect" to dendrites through synapses. Neurons signal each other with a hybrid electro-chemical scheme. The electrical part involves voltage pulses called action potentials or spikes. The spikes propagate down axons through the movement of ions across the cell membrane. When the spikes reach a synapse, they trigger a release of neurotransmitters, which diffuse across the synaptic cleft, bind to receptors on the receiving side of the synapse, and induce either a depolarizing voltage pulse (an excitatory signal) or a hyperpolarizing voltage pulse (an inhibitory signal). In that way, spikes from a given neuron can either increase or decrease the probability of spikes in a connected neuron.
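To illustrate the spiking mechanism, here is a minimal leaky integrate-and-fire simulation in Python. This is a standard textbook idealization rather than anything from this post, and all parameter values are illustrative:

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the voltage integrates its
# input, leaks back toward rest, and fires a spike (then resets) when
# it crosses threshold. Parameters are illustrative, not physiological
# claims.
def lif_spike_times(input_current, dt=0.1, tau=10.0,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """Integrate dV/dt = (v_rest - V + I)/tau; spike and reset at threshold."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        v += dt * (v_rest - v + current) / tau
        if v >= v_thresh:            # action potential: record time, reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

# A constant drive of 20 (arbitrary units) over 100 ms yields a
# regular spike train, about one spike every 14 ms.
print(lif_spike_times(np.full(1000, 20.0)))   # spike times in ms
```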
The neuroscience community basically all agrees that neural information is carried by the spikes. So the question of the neural code becomes: how is information coded into spikes? For example, if you look at an apple, something in the spiking pattern of the neurons in the brain is representing the apple. Does this change involve just a single neuron? This is called the grandmother cell code, from the joke that there is a single neuron in the brain that represents your grandmother. Or does it involve a population of neurons, known, not surprisingly, as a population code? How did the spiking pattern change? Neurons have some background spiking rate, so do they simply spike faster when they are coding for something, or does the precise spiking pattern matter? If it is just a matter of spiking faster, then this is called a rate code, since it is just the spiking rate of the neuron that carries the information. If the pattern of the spikes matters, then it is called a timing code.
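A toy sketch of the distinction: the two spike trains below have identical spike counts in the window, so any rate readout sees them as the same, while a timing readout can tell them apart. The two "decoders" here are hypothetical illustrations, not actual analyses from the field:

```python
import numpy as np

rng = np.random.default_rng(0)
window = 0.5                                     # 500 ms observation window

# Two spike trains with identical counts but different temporal patterns.
regular  = np.linspace(0.05, 0.45, 10)           # evenly spaced spike times (s)
jittered = np.sort(rng.uniform(0, window, 10))   # same count, random times

def rate_read(train):
    return len(train) / window                   # spikes per second

def timing_read(train):
    return np.diff(train).std()                  # variability of the intervals

print(rate_read(regular), rate_read(jittered))       # identical: 20.0 20.0
print(timing_read(regular), timing_read(jittered))   # different: ~0 vs > 0
```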
The majority of neuroscientists, especially experimentalists, implicitly assume that the brain uses a population rate code. The main reason they believe this is that in most systems neuroscience experiments, an animal is given a stimulus, and then neurons in some brain region are recorded to see if any respond to that particular stimulus. To measure the response, they often count the number of spikes in some time window, say 500 ms, and see if the count exceeds some background level. What seems to be true from almost all of these experiments is that no matter how complicated a stimulus you try, a group of neurons can usually be found that responds to it. So the code must involve some population of neurons, and the spiking rate must increase. What is not known is which and how many neurons are involved and whether or not the timing of the spikes matters.
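Here is a sketch of that kind of analysis: count spikes in a 500 ms window after stimulus onset and ask whether the count is larger than background firing could plausibly produce. Modeling the background as Poisson and using a significance threshold are my assumptions for illustration, not a description of any particular experiment:

```python
from scipy import stats

# Count spikes in a window after stimulus onset and ask whether that
# count is surprising under background firing alone. The Poisson
# background is an assumption of this sketch.
def responds(count, background_hz, window_s=0.5, alpha=0.01):
    expected = background_hz * window_s
    p = stats.poisson.sf(count - 1, expected)   # P(X >= count) under background
    return p < alpha

print(responds(12, background_hz=5))   # 12 spikes vs ~2.5 expected: True
print(responds(4, background_hz=5))    # consistent with background: False
```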
My sense is that the neural code is a population rate code, but the population and time window change and adapt depending on context. Thus, understanding the neural code is no simpler than understanding how the brain computes. In molecular biology, deciphering the genetic code ultimately led to understanding the mechanisms behind gene transcription, but I think in neuroscience it may be the other way around.
Tuesday, July 08, 2008
Crime and neuroscience
The worry in the criminal justice system is that people will start trying to use neuroscience in their defense, claiming, for example, that their brain was faulty and it committed the crime. Some have already tried this. I think the only way out of this predicament is to completely reframe how we administer justice. Currently, the intent of the perpetrator to commit the crime (with negligence counted as well) is required to establish criminal culpability. This is why insanity is a viable defense. I think this notion will gradually become obsolete as the general public comes to accept the mechanistic explanation of mind. When it is finally accepted by most people that there is no discontinuity between man and machine, the question of intent is going to be problematic. That is why we need to start rethinking this whole enterprise now.
My solution is that we should no longer worry about intent or even treat justice as a form of punishment. What we should do in a criminal trial is determine a) whether the defendant actually participated in the crime and b) whether they are dangerous to society. By this reasoning, putting a person in jail is only necessary if they are dangerous to society. For example, violent criminals would be locked up, and they would only be released if it could be demonstrated that they were no longer dangerous. This scheme would also eliminate the concept of punishment. The duration of a jail sentence should not be about punishment but only about benefit to society. I don't believe that people should be absolved of their crimes: for nondangerous criminals, some form of restitution in terms of garnished wages or public service could be imposed. Also, if some form of punishment can be shown to be a deterrent, then that would be allowed as well. I'm sure there are many kinks to be worked out, but what I am proposing is that a fully functional legal system could be established without requiring a moral system to support it.
This type of legal system will be necessary when machines become sentient. Because of the theorems of Gödel and Turing, proving that a machine will not be defective in some way will be impossible. Thus, some machines may commit crimes. That they are sentient also means we cannot simply go around disconnecting machines at will. Each machine deserves a fair hearing to establish guilt and sentencing. Given that there will not be any algorithmic way to establish with certainty whether or not a machine will repeat its crime, justice for machines will have to be administered in the same imperfect way it is administered for humans.
Friday, July 04, 2008
Oil Consumption
I thought it would be interesting to convert the amount of oil consumed by the world each year into cubic kilometres, to give a sense of how much oil we use and how much could possibly be left. The world consumes about 80 million barrels of oil a day. Each barrel is 159 litres and a cubic kilometre corresponds to 10^12 litres, so the amount of oil the world consumes in a year is equivalent to about 4.6 cubic kilometres. This would fill a cubic pit with sides about 1.7 km long. The US consumes a little over a quarter of the world's oil, which is about 1.2 cubic kilometres. The Arctic National Wildlife Refuge is estimated to have about 10 billion barrels of recoverable oil, about a third of a year's supply for the world.
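The arithmetic, for anyone who wants to check it (a short Python calculation using only the figures quoted above):

```python
BARRELS_PER_DAY = 80e6        # world consumption
LITRES_PER_BARREL = 159
LITRES_PER_KM3 = 1e12

yearly_km3 = BARRELS_PER_DAY * 365 * LITRES_PER_BARREL / LITRES_PER_KM3
print(yearly_km3)                        # ~4.64 cubic km of oil per year
print(yearly_km3 ** (1 / 3))             # cube side: ~1.67 km
print(0.26 * yearly_km3)                 # US share (~26%): ~1.2 cubic km
print(10e9 / (BARRELS_PER_DAY * 365))    # ANWR: ~0.34 of a year's world supply
```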
Proven world reserves amount to about 1.3 trillion barrels of oil, or about 200 cubic kilometres. If we continue at current rates of consumption, then we have about 45 years of oil left. If the rest of the world decided to use oil like Americans do, then the world's yearly consumption could increase by a factor of 5, which would bring us down to about 9 years' worth of reserves. However, given that the surface of the earth is about 500 million square kilometres, it seems plausible that there is a lot more oil out there that hasn't been found, especially under the deep ocean. The main constraints are cost and greenhouse gas emissions. We may not run out of oil anytime soon, but we may have run out of cheap oil already.
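And the reserve arithmetic, again using only the numbers quoted above:

```python
RESERVES_BARRELS = 1.3e12
YEARLY_BARRELS = 80e6 * 365                      # ~2.9e10 barrels per year

print(RESERVES_BARRELS * 159 / 1e12)             # ~207 cubic km of reserves
print(RESERVES_BARRELS / YEARLY_BARRELS)         # ~45 years at current rates
print(RESERVES_BARRELS / (5 * YEARLY_BARRELS))   # ~9 years at US-like rates
```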