Zarqa — Introducing SingularityNET’s Neural-Symbolic LLM

Zarqa is the next generation of Large Language Models infused with Neural-Symbolic techniques for smarter and more reliable AI

Large Language Models (LLMs) have exploded in popularity across many domains, including communication, customer support, new product development, and creative work, with 25 million users a day logging into ChatGPT alone.

Significant progress has also been made in the areas of human speech and image generation.

Breakthroughs in music generation, generative trading algorithms, and other modalities and applications are undoubtedly forthcoming in the near term, and they will have an unprecedented impact on the economy, society, and many aspects of people's lives.

For some, this rise in generative AI could be alarming due to fears of unethical use of artificial intelligence and the threat of mass job losses.

For others, there is a resounding sense of optimism, enthusiasm, and a justified hope that advanced technologies will change people's lives for the better and bring positive change at massive scale; and the enthusiasts greatly outnumber the rest.

However, what is most intriguing is that we are currently seeing only the tip of the iceberg: the very first significant results of deploying large-scale AI models.

Their architectural design is simplified, their cognitive abilities are limited, their knowledge is fixed and cannot be dynamically updated, and their generative behavior is imitative, only ever reusing and recombining the past experience created by humankind.

And at the same time, there are no fundamental technical limitations preventing us from radically improving on this approach. We can create and deploy much more advanced AI architectures that not only bypass the restrictions of existing approaches but teleport us into a new space of possibilities, one where those restrictions no longer apply, skeptics' fears are irrelevant, and enthusiasts' expectations are surpassed.

The Spring of Language Modeling

Returning to large language models, we should note that the deep tech teams at SingularityNET have pursued this R&D thread from the very beginning: since 2017, and since 2014 as part of a small research group of enthusiasts.

Our research team was working with different types of DNN models back when self-attention networks did not yet exist, that is, before they were invented by the group led by Ashish Vaswani.

We used everything: recurrent networks, Temporal Convolutional Networks, the separable convolutional networks of François Chollet, and architectures of our own design, applying every promising method available.

SingularityNET worked on extracting grammatical structures, integrating them with symbolic representations, utilizing masked language models, and establishing a generative approach, constantly investing all available resources.

Then, one day in 2018, the first GPT model was released by a then-tiny R&D group in California. Following this, Sergey Edunov's team demonstrated the possibility of employing large-scale data parallelism in machine translation systems.

At SingularityNET we kept improving our DNN-based models and built our first GPU cluster, beginning to train elegant models at huge scale and mastering European languages, Korean, Arabic, Amharic, and many more.

It became obvious that we needed more and more data. We collected it ourselves, deduplicating and filtering huge datasets scraped from the web, following best practices and implementing our own advanced solutions.

Already in those years of the first language models, it was obvious that training with 3D parallelism and improved optimization was required, that we needed actor-critic frameworks, and, crucially, that we needed human feedback and additional model training with reinforcement learning.

We clearly saw that the curse of multilinguality could be overcome, that multitasking and multimodality would work at scale, and that direct programming of models with natural-language commands would work (now known as prompting; there was no common terminology in those days, but the direction was clear).

We knew which architectural techniques and tricks to apply, but due to the massive compute power required for training large models, we remained just a step behind the tech giants and affiliated R&D labs, although it was always obvious to us what the next step or several steps would be. So we never stopped.

Now is the time to harness the advances of symbolic approaches: systems developed over decades of persistent effort to build a technological stack for Artificial General Intelligence (AGI) and bring it to life.

Combining these methods with all the significant capabilities of LLMs in a complex and elegant neural-symbolic architecture will bring exponential new possibilities. It is time to be the first and lead the world in the AI revolution.

Introducing Zarqa

We are proud to introduce Zarqa, a novel venture from SingularityNET mobilizing our engineering expertise in solutions based on scaled neural-symbolic AI to create a pioneering and cutting-edge next generation of LLMs, characterized by technical initiative and unwavering leadership.

The LLM space is moving ahead at tremendous speed, and with Zarqa we will be focused on near-term delivery of LLM technology that equals and exceeds the tools on the market today, leveraging the strengths of the SingularityNET ecosystem's decentralized infrastructure.

We are building our solution on an advanced computing architecture: a modular system designed specifically for large-scale training of huge discriminative and generative LLMs, for processing a large-scale knowledge metagraph and performing accelerated symbolic computation under high load, and for processing massive data streams and monitoring important information sources in real time.

All of these capabilities are combined from the outset into a single computing system designed for the particular task of training and running Neural-Symbolic AI, while remaining suitable for gradual scaling as the power of neural-symbolic intelligence grows on the way to the practical achievement of AGI.

Deep integration of LLMs with knowledge graphs will be an early step and will help provide a grounding of textual productions in reality that current LLMs so egregiously lack. Following that, integration with progressively more sophisticated knowledge graphs and associated reasoning, learning and concept creation methods will be pursued via integration of the OpenCog Hyperon toolkit and the TrueAGI data-integration pipeline. LLMs are poised to play a key part in the transition from today’s amazing yet limited AIs toward the more powerful AGI systems of the future, and Zarqa is poised to lead in this aspect, via rolling out state-of-the-art-defining LLMs and then progressively expanding them via integration of ideas and systems from additional AI paradigms and solutions under development in the SingularityNET ecosystem.
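To make the grounding idea above concrete, here is a minimal, hypothetical sketch of checking an LLM's draft claim against a knowledge graph before emitting it. All names here (`KnowledgeGraph`, `ground_answer`) are illustrative assumptions; Zarqa's actual integration goes through OpenCog Hyperon and the TrueAGI pipeline, not this toy triple store.

```python
class KnowledgeGraph:
    """A toy triple store mapping (subject, relation) -> object."""

    def __init__(self):
        self.triples = {}

    def add(self, subject, relation, obj):
        self.triples[(subject, relation)] = obj

    def lookup(self, subject, relation):
        return self.triples.get((subject, relation))


def ground_answer(draft, subject, relation, kg):
    """Replace an unverified claim in an LLM draft with the graph's fact,
    or flag the draft when no supporting triple exists."""
    fact = kg.lookup(subject, relation)
    if fact is None:
        return draft + " [unverified: no supporting fact in the knowledge graph]"
    return f"{subject} {relation.replace('_', ' ')} {fact}."


kg = KnowledgeGraph()
kg.add("Paris", "is_capital_of", "France")

# A hallucinated draft gets corrected by the grounded fact.
print(ground_answer("Paris is the capital of Germany.",
                    "Paris", "is_capital_of", kg))
# -> Paris is capital of France.
```

The design point is simply that symbolic lookups act as a gate on neural text production: claims the graph cannot support are surfaced as unverified rather than asserted.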

Zarqa is designed from the outset to achieve human and superhuman levels of creativity, logical reasoning, and decision making, combining huge multimodal neural networks with native support for logical calculations and expressions, and a symbolic core capable of working with grounded and relevant knowledge.

Zarqa supports a dynamically updated model of the world, giving rise to critical machine thinking, as well as a model of the AI's own personality that can evolve, support self-reflection, and follow a fundamental moral narrative and ethical code, bringing a level of predictability, interpretability, and security that was unattainable for previous generations of AI.

We apply techniques and methods that give the AI long-term memory of events, interlocutors, communications, and its own individualized features, plus an episodic memory containing a multimodal, contextualized representation of events, significantly expanding the cognitive capabilities of novel AI systems and allowing them to shape their own unique lifecycle. We also empower the AI with advanced perceptual mechanisms such as multimodal person identification and emotion recognition.
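The episodic memory described above can be sketched as a store of timestamped, tagged, multimodal events that the system retrieves by association and recency. The class and method names below (`Episode`, `EpisodicMemory`, `recall`) are hypothetical illustrations, not Zarqa's actual interfaces.

```python
from dataclasses import dataclass, field


@dataclass
class Episode:
    """One remembered event with its modality and associative tags."""
    timestamp: float
    modality: str              # e.g. "text", "vision", "audio"
    content: str
    tags: set = field(default_factory=set)


class EpisodicMemory:
    def __init__(self):
        self.episodes = []

    def record(self, episode):
        self.episodes.append(episode)

    def recall(self, tag, limit=3):
        """Return the most recent episodes carrying the given tag."""
        hits = [e for e in self.episodes if tag in e.tags]
        hits.sort(key=lambda e: e.timestamp, reverse=True)
        return hits[:limit]


memory = EpisodicMemory()
memory.record(Episode(1.0, "text", "User introduced themselves as Alice.",
                      {"alice", "greeting"}))
memory.record(Episode(2.0, "vision", "Recognized Alice's face at the door.",
                      {"alice", "perception"}))

# Recall everything associated with "alice", most recent first.
for e in memory.recall("alice"):
    print(e.timestamp, e.modality, e.content)
```

Retrieval by tag and recency is the simplest possible scheme; a real system would retrieve by learned multimodal embeddings rather than literal tags.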

Smart Ownership and Crowdsourced Optimization

AI is not the whole story. Zarqa also harnesses SingularityNET's expertise in blockchain systems and is built on SingularityNET's unique AI-driven smart contract ecosystem for managing resources and testing AI models. This will facilitate mass-scale Human-In-The-Loop (HITL) training of models for ever-increasing performance and suitability. Any organization, enthusiastic professional, or user eager to engage in model training and testing can take part. This will furnish Zarqa with a range of ethical norms and inputs, allowing for broad and inclusive collaboration across a global user base while emphasizing safety and accuracy.
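At its core, crowdsourced HITL evaluation means aggregating many contributors' judgments of a model response into a training signal. The sketch below uses a simple majority vote over assumed labels ("good" / "bad" / "unsafe"); the function name and voting scheme are illustrative, and the actual on-chain incentive and aggregation mechanism is not specified here.

```python
from collections import Counter


def aggregate_feedback(votes):
    """Given per-contributor labels for one model response,
    return the majority label and its share of the votes."""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)


# Five contributors rate the same model response.
votes = ["good", "good", "unsafe", "good", "bad"]
label, share = aggregate_feedback(votes)
print(label, share)  # -> good 0.6
```

In a real deployment the share would feed a reward model rather than being used directly, and low-agreement responses would be routed back for more review.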

The potential of Zarqa is unparalleled: zetta-scale intelligence with disruptive impact, tearing down barriers to entry and providing easy, open access to technology-generated abundance for all of humanity, not just an elite few, thereby reducing AI-sector exclusivity and oligopoly and democratizing this transformative technology.

Zarqa was born to transform the AI landscape and pave the way for Artificial General Intelligence. It is an honor for me to step up for SingularityNET as Co-CEO of this incredible initiative, alongside my Co-CEO Janet Adams, with of course the mighty Dr. Ben Goertzel overseeing science and Dr. Alexey Potapov leading our neural-symbolic engineering. Thank you very much for reading and following our progress updates as we mobilize Zarqa at speed.

Become a Part of the Community

Follow SingularityNET’s social channels to join the conversation and receive the latest news and updates:
