The history of the world changed profoundly in the mid-1940s, when a group of leading physicists succeeded in harnessing atomic power.
The Manhattan Project, a vast research and development effort, produced and tested the first nuclear weapon in 1945, and the world has never been the same since. One of the most dreaded legacies of the scientific advances of that era is the prospect of a nuclear winter following a large-scale nuclear war, an outcome that could lead to human annihilation.
However, something else emerged in those same years that could have even greater consequences for humanity. While Robert Oppenheimer and Enrico Fermi are nearly household names, few people would recall Warren McCulloch and Walter Pitts, yet their legacy may have repercussions surpassing those of the fathers of the atomic bomb.
In 1943, McCulloch, a neurophysiologist, and Pitts, a logician, published the paper "A logical calculus of the ideas immanent in nervous activity" (https://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf). This marked the birth of the McCulloch-Pitts neuron, the first mathematical model of a biological neuron, which laid the groundwork for the development of artificial neural networks.
The first practical artificial neural network was the perceptron, invented by Frank Rosenblatt in 1957. However, the perceptron is a single layer of threshold units and can only learn linearly separable functions, so it soon became evident that it could not solve more complex problems.
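To make that limitation concrete, here is a minimal sketch of a perceptron in Python. The function names and training details are an illustrative simplification of my own, not Rosenblatt's original formulation: trained with the classic perceptron learning rule, the unit masters the linearly separable OR function, but no choice of weights can get all four XOR cases right.

```python
# A minimal, illustrative perceptron: a single threshold unit trained
# with the classic perceptron learning rule.

def predict(weights, bias, x):
    # Step activation: fire (1) if the weighted sum crosses the threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Perceptron rule: nudge the weights toward the correct answer.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in [("OR", OR), ("XOR", XOR)]:
    w, b = train(data)
    correct = sum(predict(w, b, x) == t for x, t in data)
    print(f"{name}: {correct}/4 correct")  # OR: 4/4, XOR: at most 3/4
```

Overcoming this limitation required networks with hidden layers, and later the backpropagation algorithm to train them, which is part of why the field went through the cycles of enthusiasm and disillusionment described next.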
Neural networks experienced periods of great popularity as well as periods during which they were overshadowed by other algorithms that seemed to yield more robust results. The last winter of neural networks persisted until the mid-2000s, when a significant resurgence of research in this field began. In particular, the year 2006 witnessed the publication of a seminal paper by Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh titled "A fast learning algorithm for deep belief nets" (https://www.cs.toronto.edu/~hinton/absps/fastnc.pdf). Notably, during that time, research in neural networks was primarily, if not exclusively, conducted in universities, rather than at companies like Google or Facebook.
However, as neural networks grew more complex, ushering in the era of "Deep Learning" (where "deep" denotes the number of layers comprising these networks), and demonstrated their superiority over other machine learning methods in competitions focused on computer vision and automated speech recognition, they began to gain recognition as state-of-the-art systems. This caught the attention of the industry.
Leading tech companies such as Google, Facebook, Microsoft, Apple, and others started investing heavily in artificial intelligence research.
Against this backdrop, OpenAI was founded in 2015. Its co-founders included Elon Musk, then CEO of Tesla, and Sam Altman, then president of Y Combinator, one of the best-known American startup accelerators, which had backed companies such as Stripe, Airbnb, and Dropbox.
An article published by Wired in April 2016 (https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free) bore the subtitle: "OpenAI wants to give away the 21st century's most transformative technology. In the process, it could remake the way people make tech."
As its name suggests, OpenAI aspired to develop an artificial intelligence technology that would be open source and accessible to everyone.
They began to attract the best minds, not merely by offering lucrative salaries, but by presenting the opportunity to do research focused solely on the future, unencumbered by product development and quarterly earnings. The Wired article explains that OpenAI aimed to eventually share most, if not all, of its research with anyone interested. In other words, Musk, Altman, and their team aspired to give away, free of charge, what could potentially become the most transformative technology of the 21st century.
The underlying idea was to prevent a few dominant companies from monopolizing the emerging technologies shaping the advancement of artificial intelligence, by making AI research accessible to everyone.
However, the article emphasises another aspect, one that holds even greater relevance today. When Musk and Altman unveiled OpenAI, they also positioned the project as a means to neutralize the threat posed by a malevolent artificial superintelligence. While acknowledging the possibility that such a superintelligence could emerge from OpenAI's technology, they asserted that the widespread usability of the technology would mitigate any potential threats. Altman states, "We think it's far more likely that many, many AIs will work to stop the occasional bad actors."
The landscape shifted rapidly: in 2018, Elon Musk departed from OpenAI due to a clear divergence in vision from the rest of the top management. More recently, OpenAI announced the establishment of a for-profit entity and formed stronger ties with Microsoft. With the funding and computing power provided by Microsoft, OpenAI released ChatGPT at the end of 2022, injecting new momentum into AI development while also highlighting potential dangers and contradictions.
On the other hand, it is important to note that large tech companies like Google and Facebook have not always behaved as impenetrable behemoths zealously guarding their research. In fact, the progress behind ChatGPT owes much to the advent of Large Language Models (LLMs) built on the Transformer, a deep learning architecture centred on the attention mechanism that originated at Google Brain and Google Research and was promptly made public.
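For readers curious about what that mechanism actually computes, the following is a minimal, illustrative sketch of scaled dot-product attention, the core operation of the Transformer architecture introduced in Google's 2017 paper "Attention Is All You Need". The function name, shapes, and toy data are assumptions made for this example, not code from any Google release.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Each query is compared with every key; the scores are scaled by
    # sqrt(d_k), normalised with a softmax, and used to mix the values.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # shape: (n_queries, n_keys)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # shape: (n_queries, d_v)

# Toy example: a "sequence" of 3 tokens with embedding dimension 4.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (3, 4)
```

Each position in a sequence mixes information from every other position in proportion to the softmax weights, which is what lets these models capture long-range dependencies in text.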
PyTorch and TensorFlow, the two most widely used frameworks in the deep learning field, originate from Facebook and Google, respectively. These frameworks are completely open and available for use by anyone. The openness and accessibility of these key technologies from major tech companies have played a vital role in the rapid advancement of AI in recent years.
By contrast, OpenAI has not released any open-source code for its ChatGPT model, nor has it provided transparency regarding the data used to train it. There are concerns that OpenAI may have utilised copyrighted material without compensating the rightful content owners. Much as Bob Beamon's 1968 long jump shattered the world record and exceeded all expectations, OpenAI raised the bar dramatically with ChatGPT and has tried to hold onto its advantage by withholding its results. However, unlike Beamon's record, which stood for 23 years, comparable models have already emerged, including Google's Bard. The potential losers in this race may not be Google and Facebook, but the spirit of open-source collaboration, which could suffer a blow detrimental to the progress of artificial intelligence.
The implications of this scenario are yet to be fully realized. A leaked Google document titled "We have no moat and neither does OpenAI" (https://www.semianalysis.com/p/google-we-have-no-moat-and-neither) offers some solace and a source of optimism. The document suggests, "the uncomfortable truth is, we aren't positioned to win this arms race, and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch. I'm talking, of course, about open source. Plainly put, they are lapping us."
If one of OpenAI's objectives was indeed "to give away the 21st century's most transformative technology," this article implies that their departure from this ideal may not impede the progress of open-source initiatives.
Fortunately, OpenAI's second objective, "to neutralize the threat of a malicious artificial superintelligence," has also garnered attention and support from a significant number of people. In March 2023, over 30,000 people signed an open letter calling for a pause on training the most powerful AI systems and for increased regulation of AI development. Figures such as Elon Musk, Yoshua Bengio, Steve Wozniak and Gary Marcus endorsed the initiative.
The European Union has taken a leading role in addressing AI risks by proposing the AI Act, the first major regulatory law specifically targeting AI. The law classifies AI applications into three risk categories:
Applications and systems that pose an unacceptable risk, such as government-operated social scoring, are outright banned.
High-risk applications, like a CV-scanning tool that ranks job applicants, are subject to specific legal requirements.
Applications not explicitly banned or classified as high-risk remain largely unregulated.
In the United States, Congress is also poised to consider new bills to regulate artificial intelligence and mitigate the risks associated with this transformative technology.
In hindsight, the harnessing of atomic power during World War II may not prove to be the greatest threat humanity has created for itself. Just as nuclear power has been carefully regulated, a comparable approach appears necessary for artificial intelligence: its regulation and responsible development are essential to ensure the technology's positive impact while mitigating the risks it may pose.