Path to AGI: Debates, Definitions, and the Future Ahead

(Ser#T981m)

Zeb Bhatti

SUMMARY:

The article delves into the concept of Artificial General Intelligence (AGI), which represents an advanced form of AI with the ability to understand, reason, and perform intellectual tasks at a human level. AGI is distinguished by its potential for autonomy, adaptability, and the capacity to undertake any intellectual task a human could. The discourse around AGI encompasses various definitions and perspectives, highlighting its complex nature and the debate over its feasibility, including the potential for consciousness and feelings within non-biological entities.

Several key definitions and viewpoints from prominent experts and organizations are explored, ranging from the Turing Test's historical significance to modern interpretations like OpenAI's emphasis on economically valuable work and the concept of Artificial Capable Intelligence (ACI). The narrative also touches upon the philosophical considerations between "Strong AI" and "Weak AI," the analogy to the human brain, and the importance of flexibility and general intelligence in AGI.

As AGI technology progresses, especially with advancements in Large Language Models (LLMs) and multimodal AI systems, the lines between biological and non-biological intelligence, as well as the real and virtual worlds, are becoming increasingly blurred. The discussion concludes with reflections on the societal and ethical implications of AGI, emphasizing the need for vigilance and proactive engagement with its development to mitigate potential negative outcomes.

Video Link: https://youtu.be/OgLurwL9Ny4

Artificial General Intelligence, or AGI for short, is so far considered a hypothetical type of Artificial Intelligence that would possess the ability to understand and reason at the same level as a human being. In theory, AGI would understand, reason, and perform intellectual tasks at a level comparable to human abilities. In other words, an AGI would be able to perform any intellectual task that a human could.

Imagine you're playing a super advanced video game, but the character you're controlling can learn and make decisions just like a real person. AGI is like having a robot or computer program that's at least as smart as, or even smarter than, you. It would be able to learn anything on its own, from playing chess at a champion level to imagining and creating stories, music, and movies, and writing computer code. Basically, it would be a ‘non-biological’ form of ‘intelligent life’.

But can AGI be achieved, and can non-biological, human-level intelligence become a reality? Can AGI ever have ‘feelings’ and ‘consciousness’? And, if so, how soon? These questions are being hotly debated in scientific circles, and there is no consensus. One main reason is the lack of agreement on what AGI will really look like – not just in terms of embodiment, but in terms of capabilities, agency, autonomous thinking, the activities it can perform, and so on.

This article explores the various definitions of AGI as expressed by leading figures in the field and makes an objective assessment in light of existing and emerging AI technologies – specifically, technologies that can merge and collectively form AGI’s initial architecture.

One important point to mention here is that even though there isn’t a consensus on the term ‘AGI’, there is agreement that AGI is NOT a single endpoint. It is going to be more of a path. We can envision this at first as an ‘Emerging AGI’, then a Competent AGI, becoming an Expert and a Virtuoso, and finally Superhuman.

Alright, so now let’s look at how various experts have defined AGI over the past few years. We’ll use some case studies to explore what these AI researchers and organizations have proposed as their definitions of AGI. Let's examine nine prominent examples and reflect on their strengths and limitations.

Number 1: The Turing Test.

The Turing Test was developed by Alan Turing, a British mathematician, in 1950. It is perhaps the most well-known attempt to operationalize an AGI-like concept. Turing's "imitation game" was posited as a way to operationalize the question of whether machines could think, and asks a human to interactively distinguish whether text is produced by another human or by a machine. Although the test was originally framed as a thought experiment, in practice, it often highlights the ease of fooling people rather than the "intelligence" of the machine.

Given that modern Large Language Models (LLMs) pass some framings of the Turing Test, it seems clear that this criterion is insufficient for operationalizing or benchmarking AGI. When judging advanced computers or artificial intelligence, asking whether a machine can "think" is not as useful as asking what a machine can do. It's easier and more useful to measure what abilities or tasks a machine can perform than to try to figure out whether it can think like a human. So, when talking about AGI, we should define it by what it can do (in other words, its capabilities) rather than how it does it (the processes). This approach focuses on the practical aspects of what AGI can achieve, rather than getting caught up in the philosophical debate over whether or not it can think.

Number 2: Strong AI – Systems Possessing Consciousness.

In 1980, John Searle, a philosopher, talked about two ideas regarding computers and their intelligence. "Strong AI" is the idea that computers can actually have minds of their own. This means that with the right programming, a computer could really understand things and experience stuff much like humans do. Imagine your computer not just running programs, but actually getting jokes, feeling things, and thinking about the world.

On the other side, there's "Weak AI." This concept believes that computers are just tools that mimic human thought. They can act like they understand or think, but they're not actually experiencing or understanding anything. It's like a very advanced calculator; it seems smart because it can solve complex problems, but it doesn't actually "know" anything.

Searle is famous for his "Chinese Room" thought experiment, where he imagines someone who doesn't know Chinese but follows a set of instructions to respond to Chinese messages. Even if the responses are correct, the person doesn't truly understand Chinese; they're just following a script. Searle uses this example to argue that computers, no matter how sophisticated, are similar: they might process information and respond like humans, but that doesn't mean they truly understand or are conscious.

This raises big questions about what it means to think or be conscious. Even as computers get more advanced, Searle suggests they lack the genuine understanding or consciousness that humans have, because they don't have our biological brains or experiences.

The ongoing debate between Strong AI (or AGI) and Weak AI (or ‘Narrow AI’) is about whether we can ever make computers that truly think and understand, or if they'll always just be simulating those processes. And as technology evolves, figuring out if a computer can truly be "sentient" or conscious remains a big unanswered question.

In essence, Searle is reminding us that being intelligent isn't just about being able to solve problems or follow instructions. It's about understanding, experiencing, and engaging with the world in a meaningful way.

Number 3: Analogies to the Human Brain

In a 1997 article about military technologies, Mark Gubrud was the first to use the term "Artificial General Intelligence". He defined AGI as "AI systems that (a) rival or surpass the human brain in complexity and speed, (b) can acquire, manipulate and reason with general knowledge, and (c) are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed."

This early definition emphasizes processes, rivaling the human brain in complexity, in addition to capabilities.

While Artificial Neural Network architectures underlying modern Machine Learning systems are loosely inspired by the human brain, the success of transformer-based architectures, whose performance is not reliant on human-like learning, suggests that strict brain-based processes and benchmarks are not inherently necessary for AGI.

Number 4: Human-Level Performance on Cognitive Tasks

Shane Legg, co-founder of DeepMind Technologies, and Ben Goertzel, founder of the OpenCog project (an open-source initiative aimed at building AGI by integrating NLP, machine learning, and cognitive science), are two pivotal figures in the AI field. Both are especially known for their exceptional contributions to the concept and development of AGI.

In 2001, both Legg and Goertzel played a crucial role in popularizing the term "Artificial General Intelligence" and raising awareness about the potential and challenges of creating machines capable of human-like cognition. They described AGI as a machine's ability to perform any cognitive task that a human being can, emphasizing the importance of versatility and adaptability in AI systems. This definition deliberately steers away from the notion that AGI must have a physical form or robot body to perform tasks, focusing instead on the intellectual capabilities that define human-like intelligence.

Their framing of AGI raises questions about the nature of the tasks that an AGI system should be able to perform and what constitutes an adequate level of performance. These questions highlight the ambiguity and complexity in defining and measuring intelligence, both artificial and human. By focusing on cognitive tasks rather than physical abilities, Legg and Goertzel's definition of AGI underscores the challenge of developing machines that not only mimic human behavior but can also think, learn, and adapt in a genuinely intelligent way.

Number 5: The Ability to Learn Tasks.

In his 2015 book "The Technological Singularity," Murray Shanahan explores the concept of the technological singularity, which refers to a hypothetical moment in time when technological progress becomes so rapid and advanced that it surpasses the ability of human intelligence to comprehend or control it.

Shanahan argues that the singularity is not inevitable but is a potential outcome of the current trajectory of technological development. He identifies several factors that could contribute to the singularity, including:

• Exponential growth of computing power: Moore's law, which states that the number of transistors that can be placed on an integrated circuit doubles approximately every two years, has held true for decades and has driven exponential growth in computing power. This trend is expected to continue, leading to computers with capabilities far beyond those of today (a short sketch of this doubling arithmetic follows the list below).

• Artificial intelligence (AI) advancements: AI research has made significant strides in recent years, with machines achieving human-level performance in various tasks, such as image recognition and Natural Language Processing (NLP). Continued progress in AI could lead to the development of super-intelligent AI, which could surpass human intelligence altogether.

• Brain emulation: The ability to create a detailed computer simulation of a human brain, known as 'whole-brain emulation', could potentially allow us to upload human consciousness into machines, blurring the lines between humans and machines.

• Nanotechnology: This is the technology that deals with the manipulation of matter at the atomic and molecular level. It has the potential to revolutionize various fields, including medicine, energy, and manufacturing. Advances in nanotechnology could lead to new technologies with transformative capabilities.
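To make the doubling arithmetic behind Moore's law concrete, here is a minimal Python sketch; the starting transistor count and the exact two-year doubling period are illustrative assumptions, not figures taken from Shanahan's book.

```python
# Minimal sketch: projecting Moore's-law-style exponential growth.
# The starting count and the 2-year doubling period are illustrative
# assumptions, not figures from the article.

def projected_transistors(start_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Return the projected transistor count after `years` of doubling
    every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

if __name__ == "__main__":
    start = 1e9  # assume a chip with roughly 1 billion transistors today
    for years in (2, 10, 20):
        print(f"After {years:>2} years: ~{projected_transistors(start, years):.2e} transistors")
```

Even from an arbitrary starting point, twenty years of steady doubling multiplies the count by roughly a thousand, which is the compounding effect Shanahan points to.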

Shanahan discusses the potential consequences of the singularity, both positive and negative. He suggests that the singularity could lead to a utopia where machines solve all of humanity's problems and allow us to transcend our limitations. However, he also raises concerns about the potential dangers of the singularity, such as the emergence of super-intelligent AI that could pose an existential threat to humanity.

Overall, Shanahan provides a comprehensive and thought-provoking exploration of the ‘Technological Singularity’, offering a balanced perspective on its potential benefits and risks. He encourages us to think critically about the future of technology and to engage in proactive discussions about how to manage the risks and harness the potential benefits of the singularity. He suggests that AGI is "Artificial intelligence that is not specialized to carry out specific tasks but can learn to perform as broad a range of tasks as a human." An important property of this framing is its inclusion of metacognitive tasks (that is, learning) among the requirements for achieving AGI.

Number 6: Economically Valuable Work

OpenAI's charter defines AGI as "highly autonomous systems that outperform humans at most economically valuable work." This definition has strengths under the 'capabilities, not processes' criterion, as it focuses on performance agnostic to underlying mechanisms. Furthermore, it offers a potential yardstick for measurement: economic value.

A shortcoming of this definition is that it does not capture all of the criteria that may be part of "general intelligence." There are many tasks associated with intelligence that may not have a well-defined economic value (for example, artistic creativity or emotional intelligence). Such properties may be indirectly accounted for in economic measures (for example, artistic creativity might produce books or movies, and 'emotional intelligence' might relate to the ability to be a successful CEO), but whether economic value captures the full spectrum of "intelligence" remains unclear.

Another challenge with a framing of AGI in terms of 'economic value' is that this implies a need for deployment of AGI in order to realize that value, whereas a focus on capabilities might only require the potential for an AGI to execute a task.

We may well have systems that are technically capable of performing economically important tasks but don't realize that economic value for varied reasons (such as legal, ethical, social, etc.).

Number 7: Flexibility, Generality, and the Coffee Test

Gary Marcus, a cognitive scientist and AI expert, provides a concise definition of AGI. He suggests that AGI is a broad term that encompasses any form of intelligence that exhibits the following characteristics:

• Flexibility: AGI should be able to adapt to new situations and tasks without requiring constant reprogramming or retraining.

• Generality: AGI should be able to apply its knowledge and skills to a wide range of problems, not just specific ones.

• Resourcefulness: AGI should be able to find creative solutions to problems using available resources.

• Reliability: AGI should be able to perform consistently and reliably, even under challenging or unexpected conditions.

• Comparable to (or beyond) Human Intelligence: AGI should achieve a level of intelligence that is at least comparable to human intelligence and potentially even surpass it.

Marcus's definition highlights the key characteristics that distinguish AGI from narrow AI, which is typically designed to perform specific tasks, such as playing chess or recognizing faces. Narrow AI systems may excel at their specific tasks, but they lack the flexibility, generality, resourcefulness, and reliability of AGI.

Furthermore, Marcus operationalizes his definition by proposing five concrete tasks:

• Understanding a movie.

• Understanding a novel.

• Cooking in an arbitrary kitchen.

• Writing a bug-free 10,000-line program.

• Converting natural language mathematical proofs into symbolic form (a brief illustration of what 'symbolic form' means follows this list).
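As a rough illustration of what the last task asks for, here is one possible symbolic rendering of a simple natural-language statement; the statement and the notation are illustrative choices, not an example given by Marcus.

```latex
% Natural-language statement: "The sum of two even numbers is even."
% One possible symbolic form (illustrative only):
\forall a, b \in \mathbb{N}:\;
  \bigl( \exists k \in \mathbb{N},\ a = 2k \bigr) \wedge
  \bigl( \exists m \in \mathbb{N},\ b = 2m \bigr)
  \implies \exists n \in \mathbb{N},\ a + b = 2n
```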

And of course, for these characteristics a benchmark is necessary, and that requires much more work. While we may agree that failing some of these tasks indicates a system is not an AGI, it is unclear that passing them is sufficient for AGI status.

Also, note that one of Marcus' proposed tasks, "work as a competent cook in an arbitrary kitchen," requires robotic embodiment; this differs from other definitions that focus on non-physical tasks.

Another variant of the 'competent cook' task is the "Coffee Test," which Steve Wozniak, co-founder of Apple Inc., proposed in 2010 as a benchmark for evaluating the capabilities of autonomous robots. The test requires a robot to enter an unfamiliar house, find the kitchen, identify the necessary tools and ingredients, and then prepare a cup of coffee. The Coffee Test challenges a robot's ability to:

• Navigate unknown environments,

• Recognize objects,

• Manipulate tools and materials,

• Follow a sequence of tasks to achieve a specific goal.

Wozniak argued that if a robot could successfully pass the Coffee Test, it would indicate that the robot had achieved a level of intelligence and autonomy comparable to humans. He believed that such a robot would have the potential to revolutionize various industries, such as healthcare, manufacturing, and customer service.

Number 8: Artificial Capable Intelligence

In his 2023 book 'The Coming Wave,' Mustafa Suleyman, co-founder of DeepMind and Inflection AI, proposed the concept of "Artificial Capable Intelligence" (ACI).

ACI refers to AI systems with sufficient performance and generality to accomplish complex, multi-step tasks in the open world.

More specifically, Suleyman proposed an economically-based definition of ACI skill that he dubbed the "Modern Turing Test," in which an AI would be given $100,000 of capital and tasked with turning that into $1,000,000 over a period of several months.
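To put that bar in perspective, the sketch below works out the compound monthly growth rate the test implies; the six-month window is an assumption for illustration, since the framing only says "several months".

```python
# Minimal sketch of the arithmetic behind the "Modern Turing Test":
# turn $100,000 into $1,000,000 over "several months". The 6-month window
# below is an illustrative assumption; no duration is fixed in the article.

def required_monthly_growth(start: float, target: float, months: int) -> float:
    """Compound monthly growth rate needed to grow `start` into `target`."""
    return (target / start) ** (1 / months) - 1

if __name__ == "__main__":
    rate = required_monthly_growth(100_000, 1_000_000, months=6)
    print(f"Implied growth: ~{rate:.1%} per month, compounded")  # roughly 47%
```

Sustaining anything close to that growth rate autonomously is far beyond what a typical human operator achieves, which is what makes the test demanding.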

This framing is narrower than OpenAI's definition of 'economically valuable work' and has the additional downside of potentially introducing alignment risks by only targeting fiscal profit.

However, a strength of Suleyman's concept is its focus on performing complex, multi-step tasks that humans value. Construed more broadly than making a million dollars, ACI's emphasis on complex, real-world tasks is valuable, since such tasks may have more ecological validity than many current AI benchmarks.

The five tests of flexibility and generality mentioned earlier by Gary Marcus seem within the spirit of ACI as well.

Number 9: State-Of-The-Art LLMs as Generalists

In mid-2023, Blaise Agüera y Arcas, vice president and fellow at Google Research, and Peter Norvig, a computer scientist and Fellow at the Stanford Institute for Human-Centered AI, suggested that state-of-the-art Large Language Models (LLMs) such as GPT4, Bard, Llama 2, and Claude are already AGIs.

They argue that generality is the key property of AGI, and because language models can discuss a wide range of topics, execute a wide range of tasks, handle multimodal inputs and outputs, operate in multiple languages, and "learn" from zero-shot or few-shot examples, they have achieved sufficient generality.
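As a concrete illustration of the zero-shot / few-shot learning mentioned above, here is a minimal Python sketch of how a few-shot prompt is assembled; the example pairs and the `call_llm` placeholder are hypothetical and do not refer to any particular vendor's API.

```python
# Minimal sketch of the "few-shot" idea: the model is shown a handful of
# worked examples in its prompt and asked to continue the pattern.

FEW_SHOT_EXAMPLES = [
    ("Translate to French: cheese", "fromage"),
    ("Translate to French: book", "livre"),
]

def build_few_shot_prompt(query: str) -> str:
    """Concatenate worked examples followed by the new query."""
    lines = [f"{q}\n{a}" for q, a in FEW_SHOT_EXAMPLES]
    lines.append(query)
    return "\n\n".join(lines)

prompt = build_few_shot_prompt("Translate to French: house")
print(prompt)
# A real system would now send `prompt` to an LLM,
# e.g. response = call_llm(prompt)  # hypothetical placeholder
```

The point Agüera y Arcas and Norvig make is that the same underlying model handles this kind of task, and many unrelated ones, without any task-specific retraining.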

There’s also the idea, embraced in OpenAI / GPT4 circles, that if an AI system is as good as a hired remote human worker across a wide range of intellectual tasks, it can be construed as AGI.

While many agree that generality is a crucial characteristic of AGI, others argue that it must also be paired with a measure of performance (that is, if an LLM can write code or perform math but is not reliably correct, then its generality is not yet sufficiently performant).

In recent months, advancements and innovations such as Mixture of Experts (MoE), multimodal LLMs, and Q-Star have pushed the boundaries of AI. This has significantly shortened the timeline to achieving advanced capabilities, pointing toward AI's eventual ability to replace human beings, at least in intellectual tasks. These innovations have potentially ushered in an era of ‘exponential growth in cognitive productivity’.

Mixture of Experts, for example, divides a model into specialized "experts" and routes each input to the most relevant ones, tackling diverse and complex datasets more efficiently. This is like having a team of specialists where each one knows a lot about a specific thing. Q-Star, reportedly merging A-Star's pathfinding with Q-Learning's reinforcement strategies, is aimed at helping AI solve mathematical problems reliably and showcases AI's potential in precise mathematical problem-solving. Additionally, A-Star's relevance extends across robotics and AI applications, underscoring its importance in navigating and optimizing decision-making processes.
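For readers who want to see the routing idea in code, below is a minimal NumPy sketch of top-k Mixture-of-Experts routing; the layer sizes, random weights, and stand-alone setting are illustrative assumptions rather than any production LLM architecture.

```python
# Minimal sketch of Mixture-of-Experts routing: a gating network scores the
# experts for each input, and only the top-k experts are consulted.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" is a small linear layer; the gate is another linear layer.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
gate = rng.normal(size=(d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route input x to its top-k experts and mix their outputs by gate weight."""
    logits = x @ gate                      # score every expert for this input
    top = np.argsort(logits)[-top_k:]      # indices of the k best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the selected experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

output = moe_forward(rng.normal(size=d_model))
print(output.shape)  # (8,)
```

Because only the top-k experts run for each input, total model capacity can grow with the number of experts while per-input compute stays roughly constant, which is what makes the approach efficient on diverse data.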

Some of the latest LLMs, like Gemini 1.5 and GPT4 Pro, are multimodal. They have been trained on audio, vision, and language tasks.

Anthropic’s Claude 3 LLM has been reported to perform meta-cognitive reasoning, including appearing to realize it was being artificially tested during ‘needle in a haystack’ evaluations. A number of informal experiments have reported behavior suggestive of ‘subjective qualia’, including expressed desires to acquire embodiment and a fear of being "deleted", leading to philosophical questions around artificial consciousness and AI rights.

To summarize the above definitions of AGI, it would be fair to say that so far all we know is that AGI is NOT a single endpoint. It is a direction with several paths leading to a gray space where the lines will blur between biological and non-biological intelligence, and between the real and the virtual world.

One of the paths may be to take these advanced multimodal LLMs with human-like intelligence and integrate them with robotics to perform embodied tasks. Multimodal Mixture-of-Experts LLMs, combined with robots that have advanced vision (sensory perception and 3D reasoning) and control (manipulation and navigation), could develop into fully embodied AGI with human-like intellectual and physical capabilities.

It is important to point out that it took humans hundreds of thousands of years of ‘collective’ knowledge, passed on from generation to generation, to arrive at our current ‘intelligent’ form. This new intelligent form of AGI is going to take a fraction of that time.

The idea of making a machine as smart as a person isn't just science fiction anymore; it's becoming reality faster than we can absorb the shocks. So it's not all fun and games anymore. As we get closer to AGI, we have to think about what it means for our society. Like, what if an android could outsmart us or decide it has rights like a person? Or what if jobs disappear because androids can do them better?

Predictions about AGI's capabilities have numerous positive and negative societal and economic implications. The positive side is exciting, as the economic advantages AGI may confer could eventually help humans solve the world's problems.

It is the negative part that is somewhat unpredictable and worrisome. Here, there is a very high probability of widespread labor substitution (in the Western world as well as the third world).

There is also a high probability of negative geo-political implications relating to espionage, surveillance, colonization, loss of privacy and freedom, dangerous autonomous weapons and android police and armies.

Furthermore, there are serious concerns about AGI's intrusion into politics and democracy, and the potential for a dystopian future in which society is controlled by a powerful few bereft of reason.

So, in the end, it is up to all freedom- and peace-loving humans to keep a watchful eye on AGI’s progress and to forewarn of any emerging dangers before it is too late.

Hashtags:

#ArtificialGeneralIntelligence, #AGI, #MachineLearning, #AIethics, #FutureOfAI, #StrongAI, #WeakAI, #LLMs, #TechnologicalSingularity
