Machines outsmarting humans has long been a fancy of science fiction. With the rapid improvements in artificial intelligence (AI) programmes over the past decade, some experts believe that what was once fiction could become reality.
One of those experts, Jensen Huang, the chief executive of Nvidia, the world’s biggest manufacturer of computer chips, claimed that today’s models could advance to the point of so-called artificial general intelligence (AGI) within five years.
But this raises the question of what exactly AGI is.
Huang argues that, to be considered AGI, a programme must perform 8% better than most people at certain tests, such as bar exams for lawyers.
This is just the latest in a long line of definitions. In the 1950s, Alan Turing, the famous British mathematician, said that talking to a model that achieved AGI would be indistinguishable from talking to a human. Some would argue that the most advanced large language models already pass this Turing test.
However, others, such as Mustafa Suleyman, co-founder of DeepMind, an AI research firm, believe that AGI will have been reached when a model can be given $100,000 and turn it into $1m without instructions.
There are, of course, those who reject the concept of AGI entirely. One of them is Mike Cook of King’s College London, who believes the term has no scientific basis and means different things to different people. Harry Law of the University of Cambridge agrees that few definitions of AGI attract consensus, but contends that most are based on the idea of a model that can outperform humans at most tasks, whether that is making tea or making lots of money.
To add to the confusion, researchers at DeepMind have proposed six levels of AGI, ranked by the proportion of skilled adults that a model can outperform. They say that the technology has reached only the lowest level, with AI tools equal to or slightly better than an unskilled human.
The question of what happens when AGI is actually reached divides people almost as much as what AGI is. Some, such as the computer scientist Eliezer Yudkowsky, fear that by the time people realise AGI has been reached, humanity will be enslaved. Others argue that this is far too pessimistic a view, and that today’s AI still follows human instructions poorly.
The kicker in this whole debate is that, whilst the experts may lack an agreed definition of AGI, the courts could soon decide one for everyone. As part of a lawsuit lodged in February against OpenAI, a company he co-founded, Elon Musk has asked a court in California to decide whether the firm’s GPT-4 model shows signs of AGI. If it does, Musk claims, OpenAI has gone against its founding principle that it would license only pre-AGI technology. OpenAI denies that it has gone against its principles. Musk is seeking a jury trial, and if his wish is granted, a panel of non-experts will have to decide something the experts have struggled with for more than half a century.