Charlie Stross has been answering questions on his blog.  I like Stross for taking reasoned, often contrarian positions on SF conundrums, and one of his answers was particularly arresting.  Someone asked what he’d do if he had an AI at his disposal for one hour which could think ten lifetimes’ worth of thoughts. 

His answer: “Ten lifetimes’ thinking in an hour isn’t a lot.  …That’s 700 person-years of work. Or a research team of 35 people working for 25 years. … Right now we’re seeing more than 700 genius-years of research go into each of these topics every year. 700 genius-years is nothing against the scale of our contemporary engine of progress.”
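Stross's numbers pencil out if you assume roughly 70 thinking-years per lifetime (my assumption; he doesn't state it). A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope check of Stross's figures.
# The 70-year thinking lifetime is an assumption, not his stated number.
LIFETIME_YEARS = 70

person_years = 10 * LIFETIME_YEARS   # ten lifetimes of thought
team_size = 35
calendar_years = person_years / team_size  # how long a 35-person team takes

print(person_years)    # 700
print(calendar_years)  # 20.0
```

(Strictly, 700 person-years split across 35 people is 20 calendar years, a bit under the quoted 25; his point about scale stands either way.)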

But (as he says in the ellipses) it’s not just scale; thinking isn’t the only thing you need to do in science, and so a thinking machine is not all it’s cracked up to be.  A research team doesn’t sit and think for years on end; at least sometimes it has to go out to test hypotheses, build prototypes, and gather data.  Even a theoretical physicist has to wait for theories to be tested to see which trains of thought are reasonable.

All of this makes me more comfortable in my assertion that human-level AIs are a bad idea.  Something sitting in a box thinking is no replacement for human beings, and not even that useful, even if it retains computer-like speeds as it reaches human-level intelligence.