Talking Avatars with Stable Diffusion
Stable Diffusion talking avatar (Wav2Lip, Thin-Plate-Spline-motion model, and more).
AI Toolkit:
Wav2Lip
Thin-Plate-Spline-motion model
Google Colab version
D-ID AI
HitPaw Video Enhancer
Video Link:
Step one: for the first two (free) tools to work, you need an image that is 512 by 512 pixels, or at least has a 1:1 aspect ratio.
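If you prefer not to crop by hand, the 512 x 512 preparation can be scripted. A minimal sketch using Pillow, assuming the face is roughly centered; the file names are placeholders:

```python
from PIL import Image

def square_512(path_in, path_out):
    """Center-crop an image to a 1:1 aspect ratio, then resize to 512x512."""
    img = Image.open(path_in)
    side = min(img.size)                      # largest square that fits
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side)).resize((512, 512))
    img.save(path_out)
```

If the face is off-center, crop manually instead so it stays in frame after the square crop.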
Step two is to record a video of yourself talking.
Avoid a common pitfall: the image and the video must be framed alike. If the face in the source image is not aligned with the face in the driving video, the Thin-Plate-Spline-motion model output will look distorted.
Crop both your image and your video to a square format in Photoshop, and name the video "driving.mp4".
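The Photoshop crop is a manual step, but the geometry behind it is simple: take the largest centered square that fits the frame. A small helper to compute that crop box, which you could feed to any editor (for example, ffmpeg's crop filter):

```python
def center_square_crop(width, height):
    """Return (x, y, side) for a centered 1:1 crop of a width x height frame."""
    side = min(width, height)
    x = (width - side) // 2
    y = (height - side) // 2
    return x, y, side
```

For a 1920x1080 clip this gives a 1080x1080 square starting 420 pixels from the left edge.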
The first way to make an animated avatar is replicate.com.
The best alternative to the Thin-Plate-Spline-motion model is the Wav2Lip model.
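If you run Wav2Lip locally instead of on replicate.com, its inference script is invoked from the repository root. A sketch that assembles the command; the checkpoint and file paths are illustrative assumptions, so check them against your checkout:

```python
def wav2lip_command(face, audio, checkpoint="checkpoints/wav2lip_gan.pth"):
    """Build the Wav2Lip inference command (run it from the Wav2Lip repo root).

    face: the video of the person (e.g. "driving.mp4")
    audio: the speech track to lip-sync to (e.g. "speech.wav")
    """
    return [
        "python", "inference.py",
        "--checkpoint_path", checkpoint,
        "--face", face,
        "--audio", audio,
    ]

# To execute: subprocess.run(wav2lip_command("driving.mp4", "speech.wav"), check=True)
```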
Upscale the video with HitPaw Video Enhancer.
Combine your audio and video with Adobe Premiere Pro.
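Premiere Pro works well for muxing, but if you want a scriptable alternative, ffmpeg can combine the streams without re-encoding the video. A hedged sketch that builds such a command; the file names are placeholders:

```python
def mux_command(video, audio, out):
    """Build an ffmpeg command that keeps the video stream as-is (-c:v copy),
    encodes the audio to AAC, and stops at the shorter of the two inputs."""
    return [
        "ffmpeg", "-i", video, "-i", audio,
        "-c:v", "copy", "-c:a", "aac", "-shortest", out,
    ]

# To execute: subprocess.run(mux_command("result.mp4", "speech.wav", "final.mp4"), check=True)
```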
Go to the link in the description below for the Google Colab solution for creating your talking avatar; it also runs the Thin-Plate-Spline-motion model.
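Inside the Colab, the Thin-Plate-Spline-Motion-Model repository exposes a demo script that takes your source image and driving video. A sketch of the invocation; the config and checkpoint paths follow the repo's README for the vox-256 model, but verify them against the notebook you use:

```python
def tps_command(source_image, driving_video, result="result.mp4"):
    """Build the Thin-Plate-Spline-Motion-Model demo command
    (run it from the repository root with the checkpoint downloaded)."""
    return [
        "python", "demo.py",
        "--config", "config/vox-256.yaml",
        "--checkpoint", "checkpoints/vox.pth.tar",
        "--source_image", source_image,
        "--driving_video", driving_video,
        "--result_video", result,
    ]

# To execute: subprocess.run(tps_command("avatar.png", "driving.mp4"), check=True)
```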
Finally, what about D-ID? Compare your talking avatars with one created in D-ID.