

A few years ago, Muller and Bostrom et al surveyed AI researchers to assess their opinion on AI progress and superintelligence. Since then, deep learning took off, AlphaGo beat human Go champions, and the field has generally progressed. I’ve been waiting for a new survey for a while, and now we have one.

Grace et al ( New Scientist article, paper, see also the post on the author’s blog AI Impacts) surveyed 1634 experts at major AI conferences and received 352 responses. Unlike Bostrom’s survey, this didn’t oversample experts at weird futurist conferences and seems to be a pretty good cross-section of mainstream opinion in the field. The headline result: the researchers asked experts for their probabilities that we would get AI that was “able to accomplish every task better and more cheaply than human workers”. The experts thought on average there was a 50% chance of this happening by 2062 – and a 10% chance of it happening by 2026!

But on its own this is a bit misleading. They also asked by what year “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”. The experts thought on average that there was a 50% chance of this happening by 2139, and a 20% chance of it happening by 2037.

As the authors point out, these two questions are basically the same – they were put in just to test whether there was any framing effect. The framing effect was apparently strong enough to shift the median date of strong human-level AI from 2062 to 2139. This makes it hard to argue that AI experts actually have a strong opinion on this.

Also, these averages are deceptive. Several experts thought there was basically a 100% chance of strong AI by 2035; others thought there was only a 20% chance or less by 2100. This is less “AI experts have spoken and it will happen in 2062” and more “AI experts have spoken, and everything they say contradicts each other and quite often themselves”.
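To see why a single pooled number can mislead, here is a minimal sketch of the averaging problem, using invented toy forecasts rather than the survey’s actual data. It pools two hypothetical camps’ cumulative forecasts the simple way (linear opinion pooling, i.e. averaging the curves) and reads off where the pooled curve crosses 50%:

```python
import numpy as np

# Toy illustration with made-up numbers, NOT the survey's real data.
# Each expert reports P(strong AI by year) as a cumulative curve.
years = np.arange(2017, 2201)

def logistic_cdf(midpoint, scale):
    """A smooth S-curve standing in for one expert's cumulative forecast."""
    return 1.0 / (1.0 + np.exp(-(years - midpoint) / scale))

# Camp A: near-certain of strong AI well before 2035.
optimists = [logistic_cdf(midpoint=2028, scale=3) for _ in range(10)]
# Camp B: at most ~20% probability even by 2100 (curve capped at 0.20).
skeptics = [0.20 * logistic_cdf(midpoint=2070, scale=15) for _ in range(10)]

# Linear opinion pooling: average all the cumulative curves.
pooled = np.mean(optimists + skeptics, axis=0)

# The pooled "median" is the first year the averaged curve crosses 50%.
median_year = years[np.argmax(pooled >= 0.5)]
print(f"Pooled median year: {median_year}")  # lands around 2040 here
```

With these toy numbers the pooled median comes out around 2040 – a date the optimists consider a near-certainty and the skeptics consider very unlikely, i.e. a “consensus” year that essentially nobody in the pool actually treats as a 50/50 bet.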

This does convey more than zero information. It conveys the information that AI researchers are really unsure. I can’t tell you how many people I’ve heard say “there’s no serious AI researcher who thinks there’s any chance of human-level intelligence before 2050”. Well actually, there are a few dozen conference-paper-presenting experts who think there’s a one hundred percent chance of human-level AI before that year. I don’t know what drugs they’re on, but they exist. The moral of the story is: be less certain about this kind of thing.

The next thing we can take from this paper is a timeline of what will happen when. The authors give a bunch of different tasks, jobs, and milestones, and ask the researchers when AI will be able to complete them. Average answers range from nearly fifty years off (for machines being able to do original high-level mathematical research) to only three years away (for machines achieving the venerable accomplishment of being able to outperform humans at Angry Birds). Along the way they’ll beat humans at poker (four years), write high school essays (ten years), outrun humans in a 5K foot race (12 years), and write a New York Times bestseller (26 years). What do these AI researchers think is the hardest and most quintessentially human of the tasks listed, the one robots will have the most trouble doing because of its Olympian intellectual requirements? That’s right – AI research (80 years).
