Large language models (LLMs) like ChatGPT have ignited substantial discussion and debate since their release. These tools represent a remarkable technological achievement, yet misconceptions about their capabilities persist. In this article, we aim to dispel some of these myths.
Myth 1: Meet ChatGPT, the new true AGI
When ChatGPT and other large language models (LLMs) first emerged, they were celebrated as the pinnacle of Artificial Intelligence (AI), and even hailed as true Artificial General Intelligence (AGI). These models exhibited the ability to perform tasks previously deemed beyond the reach of technology, like answering questions and generating content in multiple languages. They could respond to plain natural-language queries, eliminating the need for complex technical code. However, AI experts soon warned that ChatGPT's "intelligence" was more about skillfully harnessing vast data to produce coherent responses than about true comprehension. LLMs could decipher queries without genuinely understanding them. As evidenced in "Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation", ChatGPT exhibits a serious causal hallucination issue, often presuming causal relationships between events whether they genuinely exist or not. Yann LeCun, a Turing Award winner, Professor at the Courant Institute of Mathematical Sciences at New York University and Vice-President, Chief AI Scientist at Meta, has famously described LLMs as an off-ramp on the highway towards human-level AI.
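To see how causal hallucination can arise from statistics alone, consider a minimal Python sketch (the series names are invented for illustration). Two completely independent random walks will often show a strong correlation; a system that learns only from co-occurrence patterns has no way to tell such a coincidence from a genuine cause.

```python
import numpy as np

# Illustration only: two *independent* random walks (names invented).
rng = np.random.default_rng(0)
ice_cream_sales = np.cumsum(rng.normal(size=1000))
shark_sightings = np.cumsum(rng.normal(size=1000))

# Despite having no causal link, the two series are often strongly
# correlated: a pattern that "looks causal" but is pure coincidence.
corr = np.corrcoef(ice_cream_sales, shark_sightings)[0, 1]
print(f"Correlation between two unrelated series: {corr:.2f}")
```

A model that has only ever seen which events co-occur in text has no mechanism for distinguishing this kind of spurious association from a real causal relationship.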
These models excel at statistical analysis of extensive datasets but cannot reason, plan, or possess subjective experiences like humans. While their outputs may appear intelligent, LLMs rely on data patterns rather than true conceptual understanding.
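To make the "patterns, not understanding" point concrete, here is a toy next-word generator in Python. It is a deliberately crude stand-in for an LLM (real models are vastly larger and condition on long contexts), but the principle is the same: fluent-looking continuations produced purely from counts of which word followed which, with no grasp of what any word means.

```python
import random
from collections import defaultdict

# Tiny training "corpus"; a real LLM sees trillions of tokens.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Record, for every word, which words were observed to follow it.
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Continue a sentence by repeatedly sampling an observed successor."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation: stop
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug and the cat"
```

The output can look grammatical, even sensible, yet nothing in the program "knows" what a cat or a rug is. Scaling this idea up enormously improves the fluency, not the nature of the mechanism.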
However, this wasn't the narrative for the average person. To the layperson, LLMs signified a leap towards AGI. In May 2023, Forbes portrayed ChatGPT-n models as "an AI that truly speaks your language, not just your words and syntax. Imagine an AI that understands context, nuance, and even humor." It emphasized that "This is no longer just a futuristic concept; it's the reality of ChatGPT". OpenAI played along, publicizing the achievements of GPT-3.5 and GPT-4, including their performance on various exams. In fact, several reports claimed that GPT-4 aced the bar exam with a score nearing the 90th percentile. Consequently, the perception of ChatGPT was not that of a powerful yet limited tool but of an AI unlike any seen before.
Myth 2: If you are useful, you are intelligent
ChatGPT was marketed not as a statistical tool but as Artificial Intelligence. As users experienced the undeniable utility of these models, they began to conflate utility with genuine intelligence.
Recent studies, such as one from MIT demonstrating increased worker productivity due to ChatGPT, and BCG's findings on how ChatGPT benefits consultants, further solidified the belief in the intelligence of LLMs. This situation parallels the placebo effect in medicine, where the perception of improvement stems from expectations rather than the actual efficacy of treatment. The placebo effect can manifest even when the treatment has no therapeutic basis.
Consider a vacuum cleaner promoted as "intelligent" because it supposedly cleans more effectively by distinguishing dirty from clean surfaces. If it does work better, people might attribute this success to the vacuum's perceived intelligence, while in reality the improved performance comes from nothing more than stronger suction. The expectation, in this case the assumed intelligence of the tool, becomes the explanation for the improved performance, even though the two are unrelated.
In a similar vein, ChatGPT might enhance productivity, but that does not equate to intelligence. By comparison, using Microsoft Office tools instead of a basic notepad also boosts productivity, yet no one considers MS Office intelligent (remember Clippy?).
Numerous researchers have noted this tendency to mistake performance for competence; Rodney Brooks, Professor of Robotics at MIT, highlighted it in an interview with IEEE Spectrum.
In essence, it's crucial to differentiate between utility and intelligence: something can be highly useful without being genuinely intelligent. However, when people expect a tool to possess certain attributes, such as intelligence, they tend to attribute positive performance to those attributes, even when another explanation exists.
Myth 3: You need to be intelligent to be dangerous
A common point of confusion surrounding LLMs is whether they are dangerous. Some envision vivid sci-fi scenarios in which LLMs rebel against humanity, but the reality is very different. Because LLMs are not intelligent entities, some argue that they cannot rebel against humans and therefore pose no threat at all.
However, it is vital to recognize that dangers exist, though they may not lead to human extinction. Similar to cars, which do not cause human extinction but can be hazardous, LLMs carry their own set of potential risks.
Consider the regulations and safety measures imposed on automobiles, from age restrictions for drivers to mandatory seatbelts, airbags, and crashworthiness tests. Cars lack intelligence, yet they can be perilous: car accidents consistently rank among the top 10 causes of death, surpassing homicides and HIV/AIDS. That is why restrictions and regulations are necessary, including the requirement to pass a test before being allowed to drive.
LLMs may not result in fatalities, but they and related generative AI tools can be dangerous in other ways. As Stephen Fry demonstrated with a video in which a clone of his voice, created without his consent, narrates a documentary, these systems can produce realistic audio and video of famous figures, thereby propagating disinformation. They can impersonate anyone in any language, increasing the potential for manipulation and deception. The capacity to disseminate disinformation grows dramatically: one could create videos of the Pope advocating genocide, the US President declaring an extraterrestrial invasion (remember Orson Welles's 1938 broadcast of The War of the Worlds?), or Putin announcing a massive nuclear attack on Europe and the US, spreading fear among the population.
All of this can now be done easily, cheaply, and convincingly. HeyGen, for instance, is a company that will create a video of you saying anything you want. The company may try to ensure its tool is not used to impersonate others, but the capability itself already exists.
Gary Marcus, a professor of psychology and neural science at New York University and a best-selling author, highlighted the primary threat of AI-driven misinformation in a TED talk, providing several examples. In an article in Wired, he explained: "[LLMs] are actually pretty dumb, and I still believe that. But there's a difference between power and intelligence. And we are suddenly giving them a lot of power."
In essence, intelligence is not a prerequisite for being a threat: powerful tools in the hands of those who understand how to wield them can be dangerous, intelligent or not.
In his TED talk, Marcus suggested a regulatory approach similar to the pharmaceutical industry's phased testing of new drugs: increasingly powerful LLMs would be tested gradually, on progressively larger and more diverse user groups. Such regulation is not a stifling of innovation but a means to mitigate potential negative effects, just as it is in the medical sciences.
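As a purely illustrative sketch of what such phased testing might look like in practice (the phase names, cohort sizes, and harm-rate thresholds below are invented, and the deployment function is a simulated stub), consider:

```python
import random

# Invented phases loosely mirroring Phase I-III drug trials: each stage
# exposes the model to a larger cohort and tightens the acceptable rate
# of flagged harmful outputs.
PHASES = [
    {"name": "Phase I",   "users": 1_000,      "max_harm_rate": 0.0010},
    {"name": "Phase II",  "users": 100_000,    "max_harm_rate": 0.0005},
    {"name": "Phase III", "users": 10_000_000, "max_harm_rate": 0.0001},
]

def run_phase(users: int) -> float:
    """Stub for a real monitored deployment: returns the measured rate
    of flagged harmful outputs for this cohort (simulated here)."""
    return random.uniform(0.0, 0.002)

def staged_rollout() -> bool:
    for phase in PHASES:
        harm_rate = run_phase(phase["users"])
        if harm_rate > phase["max_harm_rate"]:
            # A failed stage halts deployment, like a failed clinical trial.
            print(f"{phase['name']} failed (harm rate {harm_rate:.4%}); rollout halted.")
            return False
        print(f"{phase['name']} passed (harm rate {harm_rate:.4%}).")
    return True  # all stages passed: cleared for general availability

staged_rollout()
```

The point is not the specific numbers but the structure: wider exposure is earned stage by stage, with a clear stopping rule if harms exceed an agreed threshold.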
The balancing act of reality: Usefulness, Intelligence and Danger
In summary, LLMs are:
Useful
Not truly intelligent
Potentially dangerous
Not a threat to human extinction
These four elements are not mutually exclusive. It is crucial to acknowledge that, like cars, alcohol, and other regulated domains such as banking and medicine, LLMs require regulation. Regulation does not entail prohibition but aims to harness AI's benefits while mitigating potential adverse consequences, which, let's reiterate, do not include the annihilation of the human race.
In the ever-evolving landscape of AI, understanding these distinctions and the necessity for prudent regulation is paramount.