In the epilogue of his book, which I blogged about recently, Montague argued that a marriage of psychology and physics is in order. His thesis is that our intuitions about the world are based on flawed systems that were designed only to survive and reproduce. There are several responses to his argument. The first is that while we do rely on intuition to do math and physics, that intuition is based on learned concepts more than our "primitive" intuitions. For example, logical inference itself is completely nonintuitive. Computer scientist Scott Aaronson gives a nice example. Consider cards with a letter on one side and a number on the other. You are given 2, K, 5, J and the statement "every J has a 5 on the back." Which cards do you need to turn over to see if the statement is true? (Hint: if A implies B, then the only conclusion you can draw is that not B implies not A.) The answer is the J and the 2, yet most college freshmen get this wrong. Quantum mechanics and thermodynamics are notoriously counterintuitive and difficult to understand. We were led to these theories only after doing careful experiments that could marginalize away our prior beliefs.
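Aaronson's card puzzle (a version of the Wason selection task) can be settled by brute force: a card must be flipped exactly when some possible hidden face would falsify the rule. A minimal sketch, with the rule hard-coded as a predicate (the function names are mine, not part of the puzzle's standard presentation):

```python
# Each card has a letter on one side and a number on the other.
letters = ["J", "K"]
numbers = ["2", "5"]

def rule_holds(letter, number):
    # The rule under test: "every J has a 5 on the back."
    return letter != "J" or number == "5"

def must_flip(visible):
    """A card must be flipped iff some possible hidden face
    would make the rule false for that card."""
    if visible in letters:
        return any(not rule_holds(visible, n) for n in numbers)
    return any(not rule_holds(l, visible) for l in letters)

print([c for c in ["2", "K", "5", "J"] if must_flip(c)])  # -> ['2', 'J']
```

The 5 is the classic trap: the rule says nothing about what is behind a 5, but the 2 matters because a J hidden behind it would be a counterexample (the contrapositive in the hint).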

However, that is not to say that we couldn't be stuck on questions like what dark matter is, or why we remember the past and not the future, because of some psychological impediment. A resolution to this issue could again hinge on whether or not physics is computable. Montague doesn't think so, but his examples do not constitute a proof. Now if physics is computable and the brain is governed by the natural laws of physics, then the brain is also computable. In fact, this is the simplest argument to refute all those who doubt that machines can ever think: if they believe that the brain is part of the natural world and that physics can be simulated, then we can always simulate the brain, and hence the brain is computable. Now if the brain is computable, then any phenomenon in physics can be understood by the brain, or at least computed by the brain. In other words, if physics is computable then, given any universal Turing machine, we can compute any physical phenomenon (given enough time and resources).

There is one catch to my argument: if we believe the brain is computable, then we must also accept that it is finite and thus less powerful than a Turing machine. In that case, there could be computations in physics that we can't understand with our finite brains. However, we could augment our brains with extra memory (singularity, anyone?) to complete a computation if we ever hit our limit. The real question is again tractability. It is possible that some questions about physics are intractable from a purely computational point of view. The only way to "understand" these things would be to use some sort of meta-reasoning or some probabilistic algorithm. It may then be true that the design of our brains through evolution impedes our ability to understand concepts outside the range they were designed for.

Personally, I feel that the brain is basically a universal Turing machine with a finite tape, so that it can do all computations up to some scale. We can thus only understand things with a finite amount of complexity. The way we approach difficult problems is to invent a new language to represent complex objects and then manipulate the new primitives. Thus our day-to-day thinking uses about the same amount of processing, but accumulated over time we can understand arbitrarily difficult concepts.
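The universal-machine-with-a-finite-tape picture can be made concrete with a toy simulator. This is only an illustrative sketch (the transition-table format and the bit-inverting example program are my own choices, not anything from the post):

```python
def run_tm(program, tape, state="start", max_steps=1000):
    """Run a Turing machine on a finite tape.

    program maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left) or +1 (right). The machine stops when it
    enters the state 'halt' or runs off the tape -- the finite tape
    is the point: it bounds what the machine can compute.
    """
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt" or not (0 <= head < len(tape)):
            break
        symbol, move, state = program[(state, tape[head])]
        tape[head] = symbol
        head += move
    return "".join(tape)

# Example program: invert every bit, sweeping left to right.
invert = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
}
print(run_tm(invert, "10110"))  # -> 01001
```

A larger tape strictly enlarges the set of computations the machine can finish, which is the "augment our brains with extra memory" point from the previous post.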

## Tuesday, June 17, 2008

## Tuesday, June 10, 2008

### Why so slow?

John Tierney of the New York Times shows a figure from Ray Kurzweil: a log-log plot of the time between great changes in history, from the appearance of life and multicellular organisms to new technologies like televisions and computers. His graph shows power law scaling with an exponent of negative one, which I obtained by eyeballing the curve. In other words, if dT is the time until the appearance of the next great change, then it scales as 1/T, where T is the time. I haven't read Kurzweil's book, so maybe I'm misinterpreting the graph. The fact that there is scaling over such a long time is interesting, but I want to discuss a different point. Let's take the latter part of the curve, regarding technological innovation. Kurzweil's argument is that the pace of change is accelerating, so we'll soon be enraptured in the Singularity (see previous post). However, the rate of appearance of new ideas seems to be increasing only linearly with T. So the number of new ideas accumulates as T^2, which is far from exponential. Additionally, the population is increasing exponentially (at least over the last few hundred years). Hence the number of ideas per person obeys T^2 exp(-T). I'm not sure where we are on the curve, but after an initial increase, the number of ideas per person actually decreases exponentially. I was proposing in the last post that the number of good ideas was scaling with the population, but according to Kurzweil I was being super optimistic. Did I make a mistake somewhere?
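The shape of T^2 exp(-T) is easy to pin down: its derivative, (2T - T^2) exp(-T), vanishes at T = 2, so the curve rises to a single peak and then decays. A quick numerical sanity check (the sample times are arbitrary, in whatever rescaled units the argument uses):

```python
import math

def ideas_per_person(t):
    # Ideas accumulate like t^2 while the population grows like e^t.
    return t**2 * math.exp(-t)

# Rises to a single maximum at t = 2, then decays toward zero.
samples = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
values = [ideas_per_person(t) for t in samples]
print([round(v, 4) for v in values])
```

So if this reading of the graph is right, the per-capita curve is already exponentially headed for zero once T passes the peak.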

## Friday, June 06, 2008

### The singularity

Steve Hsu points to an IEEE Spectrum special report on The Singularity, variously described as the point in time when machine intelligence surpasses human intelligence, so that machines can then go on to improve themselves without our help. Super optimists like Ray Kurzweil think this will happen within 15 years, and that when it does, machine intelligence will increase with doubling times of hours, minutes, or shorter. The machines will then solve all of the world's problems instantly. He also thinks we can upload our minds into the machine world and be immortal.

Personally, I think that there is no question that humans are computable (in the Turing sense), so there is no reason that a machine that can think like we do can't someday exist (we being an existence proof). I have no idea when this will happen. Having studied computational neuroscience for about 15 years now, I can say that we aren't that much closer to understanding how the brain works than we were back then. I have a personal pet theory, which I may expound on sometime in the future, but it's probably no better than anyone else's. I'm fairly certain machine intelligence won't happen in 15 years and may not in my lifetime.

The argument that the singularity enthusiasts use is a hyper-generalization of Moore's law of exponential growth in computational power. They apply the law to everything and then extrapolate. For example, anything that doubles each year (a rate Kurzweil sometimes uses) will improve by a factor of 2^15 = 32,768 in 15 years. To Kurzweil, we are just a factor of 2^15 away from the singularity.

There are two quick responses to such a suggestion: first, where did 2^15 come from, and second, nonlinear saturation. I'll deal with the second issue first. In almost every system I've ever dealt with, there is some form of nonlinear saturation. For example, some bacteria can double every 20 minutes. If it weren't for the fact that they eventually run out of food and stop growing (i.e. nonlinear saturation), a single colony would cover the earth in less than a week. Right now, components on integrated circuits are less than 100 nm in size. Thus, in fewer than 10 halvings of feature size they would be smaller than atoms. Hence, Moore's law as we know it can only go on for another 20 years at most. To continue the pace beyond that will require a technological paradigm shift, and there is no successor on the horizon. The singularists subscribe to a Lone Ranger hypothesis: something will come to the rescue. However, even if computers do get faster and faster, software is not improving at anything near the same pace. Arguably, Word is worse now than it was 10 years ago. My computer still takes a minute to turn on. The problem is that good ideas don't seem to be increasing exponentially. At best they seem to scale only linearly with the population.
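Both back-of-the-envelope numbers in this paragraph are easy to check (a sketch; the starting values are just the rough figures quoted above):

```python
import math

# Bacteria doubling every 20 minutes: number of doublings in one week.
doublings_per_week = 7 * 24 * 60 // 20   # 504 doublings
cells = 2 ** doublings_per_week          # descendants of a single cell
# ~10^151 cells -- absurdly more than could fit on Earth, so growth
# must saturate long before the week is out.
print(math.log10(cells))

# Feature size halving from 100 nm: halvings until sub-atomic (~0.1 nm).
size_nm = 100.0
halvings = 0
while size_nm > 0.1:                     # roughly the size of an atom
    size_nm /= 2
    halvings += 1
print(halvings)  # -> 10
```

Ten halvings of feature size is roughly 20 years at a two-year cadence, which is where the "20 years at most" figure comes from.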

That leads us to the first point. How far away are we from building a thinking machine? The answer is that we haven't a clue. We may just need a single idea, or it might be several. Over the past 50 years or so, we've really only had a handful of truly revolutionary ideas about neural functioning. We understand a lot about the machinery that the brain runs on, but very little about how it all works together to create human intelligence. We are making progress, but it's slow. Nonlinearity could help us here, though, because we may be near a bifurcation that takes us to a new level of understanding. However, this is not predictable by exponential growth.

The other thing about the singularity is that the enthusiasts seem to think that intelligence is unlimited, so that thinking machines can instantly solve all of our problems. Well, if physics is computable (see previous post), then no amount of intelligence can solve the Halting problem or Hilbert's tenth problem. If we believe that P is not equal to NP, then no hyper-intelligence can solve intractable problems, unless the claim extends to the ability to compute infinitely fast. I would venture that no amount of intelligence will ever settle the argument of who was the greatest athlete of all time. Many of our problems are due to differences in prior beliefs, and that can't be solved by more intelligence. We have enough wealth to feed everyone on the planet, yet people still starve. Unless the singularity implies that machines control all aspects of our lives, there will be human problems that will not be solved by extra intelligence.
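As a toy illustration of intractability (my example, not the post's): brute-force subset sum must, in the worst case, examine all 2^n subsets, so each added element doubles the work no matter how fast the searcher is, unless P = NP or the machine computes infinitely fast:

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Return (solution, subsets_checked).
    Worst case examines all 2^n subsets of nums."""
    checked = 0
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            checked += 1
            if sum(combo) == target:
                return combo, checked
    return None, checked

# All elements are even and the target is odd, so no subset works and
# every one of the 2^5 = 32 subsets must be examined.
solution, checked = subset_sum_bruteforce([2, 4, 6, 8, 10], 1)
print(solution, checked)  # -> None 32
```

At 40 elements the same search is a trillion subsets; raw intelligence doesn't change that exponent, only a better algorithm (or P = NP) would.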

The example enthusiasts sometimes give of a previous singularity is the emergence of humans. However, from the point of view of the dodo, the humpback whale, the buffalo, the polar bear, the American elm, and basically most other species, life got worse following the rise of humans. We're causing the greatest extinction event since the fall of the dinosaurs. So who's to say that when the machine singularity strikes, we won't be similarly left behind? Machines may decide to transform the earth to suit their needs and basically ignore ours.
