As a species, humans are constantly searching for the next big thing: the next moon landing, the best thing since sliced bread. So, when OpenAI launched an early demo of ChatGPT last month, everyone was fluttering with excitement.
Here was an AI model that was being touted by some as a more humane conversational AI. Indeed, various members of the tech world, such as Aaron Levie, CEO of Box, were of the view that ChatGPT was "a rare moment in time where technology was offering a visible example of how everything is going to change." Others, such as Y Combinator co-founder Paul Graham, believed that something big was happening, while Alberto Romero, author of The Algorithmic Bridge, went as far as calling it "the best chatbot in the world."
But does ChatGPT actually live up to the hype, or is it another false dawn?
Journalist Sharon Goldman conducted some research for VentureBeat to examine just how good ChatGPT is. As her findings show, the AI chatbot has safeguards built in to prevent it from spewing hateful content, including antisemitic remarks. Furthermore, unlike previous AI chatbots that seemed at first glance to be authorities on everything, ChatGPT is keen to emphasize that it is simply a machine-learning model rather than an authority on the world.
Furthermore, OpenAI has been refreshingly honest from the beginning about ChatGPT's limitations: that there is no source of truth during RL (reinforcement learning) training; that if the model is trained to be more cautious, it may end up declining questions it can answer correctly; and that supervised training misleads the model, because the ideal answer depends on what the model knows rather than on what the human demonstrator knows.
However, like many other chatbot AIs, ChatGPT has a habit of spouting nonsense that, whilst believable, is still nonsense.
What this means is that these models are trained to predict the next word for a specific input, not to check whether a fact they're stating is correct.
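The idea can be illustrated with a deliberately tiny sketch: a toy bigram model (nothing like ChatGPT's actual architecture, purely a simplified stand-in) that learns which word tends to follow which. It produces fluent continuations of whatever it was trained on, true or not, because it only models word co-occurrence.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower: plausibility, not truth."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Toy training text containing a false claim; the model happily
# reproduces it, since it never evaluates factual accuracy.
corpus = "the moon is made of cheese and the moon is made of cheese"
model = train_bigram(corpus)
print(predict_next(model, "made"))  # prints "of" -- a fluent, unchecked continuation
```

Real large language models predict over tens of thousands of tokens with billions of parameters, but the failure mode is the same in kind: the training objective rewards likely-sounding text, not verified facts.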
Some have tested this directly. Technology analyst Benedict Evans asked ChatGPT to "write a bio for Benedict Evans." The result, he said, was "plausible but almost entirely untrue."
Others, such as Arvind Narayanan, have pointed out that whilst people might be excited about ChatGPT's possibilities for learning, unless you already know the answer to the query you've given it, you have no way of knowing whether it's telling you the truth. For instance, Narayanan asked the bot some information security questions, and whilst the answers sounded plausible, they were all "BS."
This raises the question of whether human users are okay with some nonsense being spouted, so long as it looks and sounds good. As Gini Dietrich showed, those who are sticklers for good writing and solid, provable facts may well not be. But the average user? Who knows. And therein lies the real risk.