Okay, almost everything about the story is bogus. Let’s dig in:
It’s not a “supercomputer,” it’s a chatbot: a script written to mimic human conversation. There is no intelligence, artificial or otherwise, involved. It’s just a chatbot.
Plenty of other chatbots have similarly claimed to have “passed” the Turing test in the past, often with higher ratings. Here’s a story from three years ago about another bot, Cleverbot, “passing” the Turing Test by convincing 59% of judges it was human (much higher than the 33% Eugene Goostman claims).
It “beat” the Turing test here by gaming the rules: the judges were told the computer was a 13-year-old boy from Ukraine, priming them to mentally explain away its odd responses.
The “rules” of the Turing test always seem to change. Hell, Turing’s original test was quite different anyway.
As Chris Dixon points out, you don’t get to run a single test with judges you picked yourself and declare you accomplished something. That’s just not how it’s done. If someone claimed to have achieved nuclear fusion or cured cancer, you’d wait for peer review and repeated tests under other conditions before buying it, right?
The whole concept of the Turing Test itself is kind of a joke. While it’s fun to think about, creating a chatbot that can fool humans is not really the same thing as creating artificial intelligence. Many in the AI world regard the Turing Test as a needless distraction.
Oh, and the biggest red flag of all: the event was organized by Kevin Warwick at Reading University. If you’ve spent any time at all in the tech world, that name alone should set off alarms. Warwick is somewhat infamous for making ridiculous claims to the press, which gullible reporters repeat without question. He’s been doing it for decades.
Have a look at the original article over at Techdirt for more analysis of why the claims are most likely false.