Replicate.com
We’re making machine learning as easy to use as software.
Replicate runs machine learning models in the cloud. Replicate has a library of open-source models that you can run with a few lines of code. If you're building your own machine learning models, Replicate makes it easy to deploy them at scale.
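As a rough illustration of "a few lines of code", here is a minimal sketch of calling a hosted model with Replicate's official Python client. It assumes `pip install replicate` and an API token in the `REPLICATE_API_TOKEN` environment variable; the `build_input` helper and its parameter names are illustrative, so check each model's page for its exact input schema.

```python
# Sketch: running a text-to-image model on Replicate from Python.
# Assumes the `replicate` package and a REPLICATE_API_TOKEN env var.
import os


def build_input(prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Assemble an input payload for a text-to-image model such as
    stability-ai/sdxl (illustrative parameter names; the real schema
    is listed on the model's Replicate page)."""
    return {"prompt": prompt, "width": width, "height": height}


if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate

    # Some models can be addressed by slug alone; others need a
    # pinned version, e.g. "owner/name:versionhash".
    output = replicate.run(
        "stability-ai/sdxl",
        input=build_input("an astronaut riding a horse, photorealistic"),
    )
    print(output)  # typically a list of image URLs
```

The guard around the API call means the sketch only reaches the network when a token is actually configured.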
Replicate is an impressive site with scores of models, such as:
Music generation: Generate music from a prompt or melody (facebookresearch/musicgen).
Audio generation: Models to generate and modify audio.
Image restoration: Models that improve or restore images by deblurring, colorizing, and denoising them.
Image editing: Tools for manipulating images.
Image to text: Models that generate text prompts and captions from images.
Language models: Models that can understand and generate text.
Diffusion models: Image and video generation models trained with diffusion processes.
ML Makeovers: Models that let you change facial features.
Style transfer: Models that take a content image and a style reference to produce a new image.
Super resolution: Upscaling models that create high-quality images from low-quality images.
Video generation: Models that create and edit videos.
anotherjesse/zeroscope-v2-xl: Zeroscope V2 XL & 576w, text-to-video generation.
stability-ai/stable-diffusion: A latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
stability-ai/sdxl: A text-to-image generative AI model that creates beautiful 1024x1024 images.
tencentarc/gfpgan: Practical face restoration algorithm for old photos or AI-generated faces.
prompthero/openjourney: Stable Diffusion fine-tuned on Midjourney v4 images.
openai/whisper: Convert speech in audio to text.
logerzhu/ad-inpaint: Product advertising image generator.
a16z-infra/llama-2-7b-chat: A 7 billion parameter language model from Meta, fine-tuned for chat completions.
a16z-infra/llama-2-13b-chat: A 13 billion parameter language model from Meta, fine-tuned for chat completions.
replicate/llama-2-70b-chat: A 70 billion parameter language model from Meta, fine-tuned for chat completions.
lucataco/animate-diff: Animate Your Personalized Text-to-Image Diffusion Models.
lucataco/gfpgan: Practical face restoration algorithm for old photos or AI-generated faces (for larger images).
mingcv/bread: Online demo of Bread (Low-light Image Enhancement via Breaking Down the Darkness), which enhances images with poor or irregular illumination and heavy noise.
fofr/image-prompts: Generate image prompts for Midjourney. Prefix inputs with "Image".
replit/replit-code-v1-3b: Generate code with Replit's replit-code-v1-3b large language model.
lucataco/realistic-vision-v5.1: Implementation of Realistic Vision v5.1 (v5.0 is also deployed, with xformers for fast inference).
Efficient diffusion model for image super-resolution by residual shifting.
abdullahmakhdoom/diffusers-txtnimg2img: A diffusion model that changes an input image according to a provided prompt.
Machine learning can now do some extraordinary things, but it's still hard to use. You spend all day battling with messy Python scripts, broken Colab notebooks, perplexing CUDA errors, and misshapen tensors. It's a mess.
The reason machine learning is so hard to use is not that it's inherently hard; we just don't have good tools and abstractions yet.
We're making machine learning accessible to all software engineers. You should be able to import an audio transcriber the same way you can import an npm package. You should be able to fine-tune GPT as easily as you can fork something on GitHub. Machine learning needs better tools.
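The "import an audio transcriber" idea above can be sketched with the hosted openai/whisper model from the list. This is a hedged example, not a definitive recipe: it assumes the `replicate` client and a `REPLICATE_API_TOKEN`, and the `whisper_input` helper is hypothetical, so consult the model page for the exact input fields.

```python
# Sketch: speech-to-text via Replicate's hosted openai/whisper.
# Assumes the `replicate` package and a REPLICATE_API_TOKEN env var.
import os


def whisper_input(audio_url: str, language: str = "") -> dict:
    """Build an input payload for a Whisper-style transcription model.
    `audio_url` points at an audio file; `language` is an optional
    hint (field names here are illustrative)."""
    payload = {"audio": audio_url}
    if language:
        payload["language"] = language
    return payload


if __name__ == "__main__" and os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate

    result = replicate.run(
        "openai/whisper",  # may need a pinned version: "openai/whisper:<hash>"
        input=whisper_input("https://example.com/speech.mp3"),
    )
    print(result)  # transcription output as returned by the model
```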