Many have feared that AGI will become the technological terror portrayed in the Terminator franchise. However, like a student graduating from school and being deemed eligible, AGI has been put through some tests as well. Here are a few of them:
The Turing test ($100,000 Loebner prize interpretation)
The Turing test was proposed in Turing (1950) and has many interpretations. One specific interpretation is provided by the conditions for winning the $100,000 Loebner Prize. Since 1990, Hugh Loebner has offered $100,000 to the first AI program to pass this test at the annual Loebner Prize competition. Smaller prizes are given to the best-performing AI program each year, but no program has performed well enough to win the $100,000 prize.
The exact conditions for winning the $100,000 prize will not be defined until a program wins the $25,000 “silver” prize, which has not yet been done. However, we do know the conditions will look something like this:
A program will win $100,000 if it can fool half the judges into thinking it is human while interacting with them in a freeform conversation for 30 minutes and interpreting audio-visual input.
The coffee test
Steve Wozniak suggested a (probably) more difficult test — the “coffee test” — as a potential operational definition for AGI:
Go into an average American house and figure out how to make coffee, including identifying the coffee machine, figuring out what the buttons do, finding the coffee in the cabinet, etc.
If a robot could do that, perhaps we should consider it to have general intelligence.
The robot college student test
Ben Goertzel suggests a more challenging operational definition, the “robot college student test”:
When a robot can enroll in a human university and take classes in the same way as humans, and get its degree, then we’ll say that we’ve created an artificial general intelligence.
The employment test
Nils Nilsson, one of AI’s founding researchers, once suggested an even more demanding operational definition for “human-level AI” (what we’ve been calling AGI): the employment test.
Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or “jobs” at which people are employed. I suggest we replace the Turing test by something I will call the “employment test.” To pass the employment test:
AI programs must… [have] at least the potential [to completely automate] economically important jobs. To develop this operational definition more completely, one could provide a canonical list of “economically important jobs,” produce a special vocational exam for each job (e.g. both the written and driving exams required for a U.S. commercial driver’s license), and measure machines’ performance on those vocational exams.
This is a bit “unfair” because I doubt that any single human could pass such vocational exams for any long list of economically important jobs. On the other hand, it’s quite possible that many unusually skilled humans would be able to pass all or nearly all such vocational exams if they spent an entire lifetime training each skill, and an AGI — having near-perfect memory, faster thinking speed, no need for sleep, etc. — would presumably be able to train itself in all required skills much more quickly, if it possessed the kind of general intelligence we’re trying to operationally define.
Overall, these tests show how we as a community think about what makes our cognition unique. However, we still need to reach a consensus on what defines AGI and on the boundaries within which we can experiment with it.
For a school project, Rohan, Suvana, and I created PRECaRiOUS, a blog aimed at raising awareness and ultimately answering the question:
How will the development of Artificial General Intelligence (AGI) be an infringement of human rights?
A lot of that content is still relevant here, which is why I have adapted those posts into a mini-series on this blog.