Steve Hsu points to an IEEE Spectrum special report on The Singularity, which is variously described as the point in time when machine intelligence surpasses human intelligence, so that machines can then go on to improve themselves without our help. Super optimists like Ray Kurzweil think this will happen within 15 years, and that when it does, machine intelligence will increase with doubling times of hours, minutes, or less. The machines will then solve all of the world's problems instantly. Kurzweil also thinks we will be able to upload our minds into the machine world and become immortal.
Personally, I think there is no question that humans are computable (in the Turing sense), so there is no reason that a machine that can think as we do cannot someday exist (we being an existence proof). I have no idea when this will happen. Having studied computational neuroscience for about 15 years now, I can say that we aren't much closer to understanding how the brain works than we were back then. I have a personal pet theory, which I may expound on sometime in the future, but it's probably no better than anyone else's. I'm fairly certain machine intelligence won't happen in 15 years and may not in my lifetime.
The argument that the singularity enthusiasts use is a hyper-generalization of Moore's law of exponential growth in computational power. They apply the law to everything and then extrapolate. For example, anything that doubles each year (a rate Kurzweil sometimes uses) will improve by a factor of 2^15 = 32,768 in 15 years. To Kurzweil, we are just a factor of 2^15 away from the singularity.
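The arithmetic behind this kind of extrapolation is trivial to check; here is a one-line sanity check, assuming the annual-doubling rate quoted above:

```python
# Sanity check of the extrapolation: a quantity that doubles once a year
# grows by a factor of 2**15 over 15 years.
years = 15
growth_factor = 2 ** years
print(growth_factor)  # 32768, i.e. roughly a 33,000-fold improvement
```

Of course, the whole question is whether anything relevant actually doubles every year, and for how long.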
There are two quick responses to such a suggestion: the first is to ask where 2^15 came from, and the second is nonlinear saturation. I'll deal with the second issue first. In almost every system I've ever dealt with, there is usually some form of nonlinear saturation. For example, some bacteria can double every 20 minutes. If it weren't for the fact that they eventually run out of food and stop growing (i.e. nonlinear saturation), a single colony would cover the earth in less than a week. Right now, components on integrated circuits are less than 100 nm in size. Thus, in fewer than 10 doublings they will be smaller than atoms. Hence, Moore's law as we know it can only go on for another 20 years at most. To continue the pace beyond that will require a technological paradigm shift, and there is no successor on the horizon. The singularists believe in the Lone Ranger hypothesis, so something will come to the rescue. However, even if computers do keep getting faster, software is not improving at anything near the same pace. Arguably, Word is worse now than it was 10 years ago. My computer still takes a minute to boot. The problem is that good ideas don't seem to be increasing exponentially. At best, they seem to scale only linearly with the population.
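To make the saturation point concrete, here is a minimal sketch contrasting pure exponential growth with a discrete logistic update that has a carrying capacity K. The growth rate and capacity are illustrative numbers, not measurements:

```python
# Exponential growth with no limits: n doublings multiply the population by 2**n.
def exponential(n0, doublings):
    return n0 * 2 ** doublings

# Discrete logistic update: growth slows to zero as n approaches the capacity K.
def logistic_step(n, r, K):
    return n + r * n * (1 - n / K)

# Bacteria doubling every 20 minutes: number of doublings in one week.
doublings_per_week = 7 * 24 * 3  # 504
print(exponential(1, doublings_per_week))  # absurdly large (2**504) absent saturation

# With a carrying capacity, growth levels off instead of exploding.
n, K = 1.0, 1e9  # illustrative capacity
for _ in range(100):
    n = logistic_step(n, r=0.7, K=K)
print(n)  # converges to K, nowhere near 2**504

# Feature-size version: halving 100 nm ten times lands below atomic scale (~0.1 nm).
print(100 / 2 ** 10)  # 0.09765625 nm
```

The exponential and logistic curves are indistinguishable early on, which is exactly why naive extrapolation from the early phase is so misleading.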
That leads us to the first point: how far away are we from building a thinking machine? The answer is that we haven't a clue. We may need just a single idea, or we may need several. Over the past 50 years or so, we've really only had a handful of truly revolutionary ideas about neural functioning. We understand a lot about the machinery the brain runs on but very little about how it all works together to create human intelligence. We are making progress, but it's slow. Nonlinearity could actually help us here, because we may be near a bifurcation that takes us to a new level of understanding. But that is not something exponential growth can predict.
The other thing about the singularity is that the enthusiasts seem to think intelligence is unlimited, so that thinking machines can instantly solve all of our problems. Well, if physics is computable (see previous post), then no amount of intelligence can solve the halting problem or Hilbert's tenth problem. If we believe that P is not equal to NP, then no hyper-intelligence can solve intractable problems, unless the claim extends to the ability to compute infinitely fast. I would venture that no amount of intelligence will ever settle the argument over who was the greatest athlete of all time. Many of our problems are due to differences in prior beliefs, and those can't be resolved by more intelligence. We have enough wealth to feed everyone on the planet, yet people still starve. Unless the singularity implies that machines will control all aspects of our lives, there will be human problems that extra intelligence will not solve.
The example enthusiasts sometimes give of a previous singularity is the emergence of humans. However, from the point of view of the dodo bird, the humpback whale, the buffalo, the polar bear, the American elm tree, and basically most other species, life got worse after the rise of humans. We're causing the greatest extinction event since the one that wiped out the dinosaurs. So who's to say that when the machine singularity strikes, we won't similarly be left behind? Machines may decide to transform the earth to suit their needs and basically ignore ours.