The man with the pistol will be a dead man. Are we just rephrasing the old metonymic adage coined by the English author Edward Bulwer-Lytton that "The pen is mightier than the sword"? Recently, mocking AI doomsayers, Yann LeCun posted the following on his Twitter account:
- Engineer: I invented this new thing. I call it a ballpen
- TwitterSphere: OMG, people could write horrible things with it, like misinformation, propaganda, hate speech. Ban it now!
- Writing Doomers: imagine if everyone can get a ballpen. This could destroy society. There should be a law against using ballpen to write hate speech. regulate ballpens now!
- Pencil industry mogul: yeah, ballpens are very dangerous. Unlike pencil writing which is erasable, ballpen writing stays forever. Government should require a license for pen manufacturers.
As expected, LeCun's post drew both praise and ridicule from the ever-increasing multitude of AI experts crowding social media. The world seems divided between those who applaud the new generative AI and LLMs for their unparalleled benefits and those who argue that, if left unregulated, they pose a threat to democracy.
It is easy to list some of the positives that even the critics acknowledge AI and the all-powerful LLMs can bring, such as:
- Potential advancements in education.
- Improvements in programming, coding, and faster article editing.
Similarly, it is easy to list some of the negatives:
- Ease of creating fake news.
- Hallucinations: these systems may produce fabricated and incorrect information.
The one point on which both groups largely agree is that these systems, despite their power, are rather unintelligent. Personally, I prefer to call them AGETFYI (Artificial Good Enough To Fool You Intelligence) rather than AGI: although some people seem to think they are quite smart, they are not truly intelligent and still have a long way to go. In fact, experts from both sides seem to concur on at least two points: these LLMs are relatively dumb, and concerns about "human extinction" are exaggerated.
However, when it comes to the dissemination of fake news (whether intentional or due to hallucinations), the situation changes. Yann LeCun, firmly in the "AI is good" camp, argues that the fear of AI facilitating the production of fake news is unfounded.
According to him, there are two bottlenecks: getting past the content-moderation systems of the media where content is disseminated (which automatically remove fake news) and capturing the public's limited attention, and modern AI helps with neither. Others (for example, Gary Marcus) counter that moderation systems can be overwhelmed: modern AI makes fake content so much cheaper to produce that its volume can grow enough to overload them. If 0.1% of bad content slips through today, multiply the volume of fake content by a hundred and the leakage becomes the equivalent of 10% of today's content; multiply it by a thousand and it matches today's entire volume. As Gary Marcus and others point out, you don't need to be smart to be dangerous; the lack of intelligence of these systems does not mean they will be inoffensive.
Just a few weeks ago, the English edition of Le Monde published an article that began with these exact words: "You may have seen a picture of Donald Trump being arrested by police officers at the foot of Trump Tower in New York, or one of French President Emmanuel Macron eating mud, or, perhaps you saw the Pope wearing a luxury puffer coat. These pictures have been shared thousands, and in some cases millions, of times on social media since mid-March – but they are all fake".
Such examples are likely to become even more prevalent, at least until they are identified as fakes. Akin to the scenario involving guns: when a man with a .45 meets a man with a rifle, the man with the pistol will be a dead man; when a man with a rifle meets a man with a fully automatic weapon, the man with the rifle will be a dead man; and, more importantly, when a person reading the news finds only one authoritative news outlet among a thousand, that person is likely to be a misinformed person, or to become an easily manipulable one.
Awesome read!
I am also seeing the rise of fake news and how rapidly misinformation is spreading, and we will see people using AI to do it in the coming years. There needs to be regulation of content, data governance, and proper systems to differentiate between truth and misinformation online. By the time proper systems are in place, misinformation and its complexity will be at another level of automation.