AI, AGI, and ASI – All Confusion Removed


Since the 2009 AGI conference, the common responses from experts about AGI have shifted from 'will never happen' or 'will not happen until the very far future' to 'AGI will arrive by the end of the decade'.

At the 2020 AGI conference, many more people joined the 'AGI will arrive by the end of the decade' camp, and that was the 'slow case' scenario.

By 2023, the discussion had moved beyond AGI to ASI (Artificial Super Intelligence) and the 'Singularity'. Ray Kurzweil has designated 2045 as the year of the 'Singularity', when the non-biological intelligence that humans have created will be a billion or more times more capable than all biological intelligence combined.

One of the major problems in this whole discussion is that everyone seems to define AGI and ASI somewhat differently. Some talk about human-level intelligence; for others it needs to be 'superhuman'.

To me it's all about 'Ascensions vs. Capabilities'. In my eyes, asking whether the AI is sentient or conscious is not the right question.

My personal definition of AGI is a system capable of feats that surpass human intelligence.

Just a few years ago, logical reasoning, memory access, and multimodality (the ability to handle inputs and outputs other than text) seemed like major challenges that wouldn't be solved anytime soon. Today, we have already blown past them.

GPT-4 is far better at logical reasoning than its predecessor, and there are countless multimodal models out there.

AI:

Artificial intelligence (AI) refers to any system that can perform tasks requiring human-like perception, pattern recognition, reasoning, learning, planning, creativity, or problem-solving. AI encompasses a vast range of technologies from machine learning and deep learning to expert systems and robotics.

Current AI systems display narrow intelligence: they can perform specific tasks in particular domains very well, but cannot transfer that ability to other tasks or domains. For example, an AI may become an expert at chess but cannot then directly apply that capability to medical diagnosis.

AGI:

Artificial general intelligence (AGI) refers to AI systems that possess more generalized, flexible intelligence comparable to the broad abilities of the human mind. An AGI could learn a variety of skills and apply knowledge gained from one domain to help solve problems in an entirely different domain.

For example, an AGI could learn to recognize objects, understand natural language, make inferences, solve puzzles, plan schedules, write prose, control robotics, and perform most other tasks a human can, at a similar level.

The key difference versus narrow AI is the ability to transfer learning and skills across a range of environments and problems, rather than being confined to a single specialized domain. However, AGI does not necessarily imply human-like consciousness, emotions, or self-awareness.

ASI:

Artificial superintelligence (ASI) refers to a hypothesized AI system that surpasses the most intelligent humans across all domains of cognitive capability. ASI is not simply an AGI with more speed or memory, but one that radically self-improves and recursively enhances its intelligence to attain abilities currently inconceivable to humans.

The roadmap from current narrow AI to AGI remains challenging and uncertain. But if AGI is attained, the potential for recursively self-improving systems to lead to superintelligence constitutes an even greater unknown.

ASI could bring about unprecedented technological and societal transformation, while carrying risks if not developed safely and aligned with human values. Understanding the distinction between present narrow AI versus generalized AGI and beyond is critical for charting the responsible path ahead in AI development.

In summary, AI today remains narrow; AGI refers to a broader generalization of intelligence; and ASI denotes surpassing human cognitive abilities across all dimensions. Recognizing the current state of the art in AI versus future possibilities is important for setting realistic expectations and priorities for progress.
