PETALS

Petals takes a BitTorrent-style approach to serving large language models: much as the peer-to-peer BitTorrent protocol spreads a file across many machines, Petals spreads the model across many volunteers. Rather than every user downloading the entire model from a central server, participants host different parts of it (blocks of transformer layers) and serve them to one another, with clients routing requests through those peers. This makes running LLMs far more practical, because no single machine ever needs to download or hold the whole model.
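
As a rough illustration of the client side, the sketch below follows the usage pattern shown in the Petals README: the client downloads only small local pieces (the tokenizer, embeddings and output head), while the transformer blocks are executed by remote peers in the swarm. The class name AutoDistributedModelForCausalLM and the example model identifier come from the public Petals documentation and may differ between versions, so treat this as a sketch rather than a guaranteed API.

```python
# Minimal client-side sketch, assuming the petals and transformers packages
# are installed and the model below is being served by the public swarm.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model; check the Petals docs for current ones

tokenizer = AutoTokenizer.from_pretrained(model_name)
# Only the small local parts of the model are loaded here; the heavy
# transformer blocks run on volunteer machines across the internet.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Distributed inference means", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0]))
```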

"Enter Petals, a new decentralized network that is flipping the script on AI capitalism. With Petals, regular people can pool their online computer power to run algorithms like ChatGPT, the world’s largest AI text generator. And anyone can provide hardware to the network – no need for expensive servers or data centers".

Petals is being developed by BigScience, a community project backed by the startup Hugging Face with the goal of making text-generating AI widely available. The system can run ChatGPT-style text-generating AI in a decentralized way by pooling resources from people across the internet. With Petals, the code for which was released publicly last month, volunteers can donate their hardware to tackle a portion of a text-generating workload and team up with others to complete larger tasks, similar to Folding@home and other distributed compute setups.
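
On the contributor side, joining the swarm comes down to starting a Petals server that loads a slice of the model's transformer blocks and announces them so that clients can route requests through it. The sketch below simply launches the server module described in the Petals README from Python; the model name is only an example, and the exact flags and supported models depend on the installed Petals version, so check the project documentation before running it.

```python
import subprocess

# Hypothetical sketch: start a Petals server that hosts part of a model and
# serves it to the swarm. Most contributors run this command directly in a
# terminal; the model name is illustrative and serving it requires a suitable GPU.
subprocess.run(
    ["python", "-m", "petals.cli.run_server", "petals-team/StableBeluga2"],
    check=True,
)
```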

“Petals is an ongoing collaborative project from researchers at Hugging Face, Yandex Research and the University of Washington,” Alexander Borzunov, the lead developer of Petals and a research engineer at Yandex, told TechCrunch in an email interview. “Unlike other LLM APIs, which are typically less flexible, Petals is entirely open source, so researchers may integrate the latest text generation and system adaptation methods not yet available in APIs, or access the system’s internal states to study its features.”

Petals is still under development, but it has the potential to make LLMs more accessible to a wider range of users. This could lead to new applications for LLMs, such as chatbots, machine translation, and natural language generation.

Here are some of the benefits of using Petals:

  • It is more affordable than provisioning a single machine powerful enough to run a large LLM on its own.

  • It is more scalable, as users can add more computing resources to the network as needed.

  • It is more reliable, as the network is not dependent on a single server.

Here are some of the limitations of using Petals:

  • Inference can be slower than running an LLM on a single machine, since activations must travel over the network between peers.

  • Contributing compute to the network requires access to a reasonably powerful GPU.

  • It is not yet as widely available as other LLM frameworks.

Overall, Petals is a promising new framework for running large language models. It has the potential to make LLMs more accessible to a wider range of users, and it could lead to new applications for LLMs.
