# AniPortrait

## AniPortrait: Audio-Driven Synthesis of Photorealistic Portrait Animation

{% embed url="https://huggingface.co/spaces/ZJYang/AniPortrait_official" %}
{% endembed %}

## AniPortrait: Vid2Vid (Video-Driven Synthesis of Portrait Animation)

{% embed url="https://replicate.com/camenduru/aniportrait-vid2vid" %}
{% endembed %}

AniPortrait is a novel framework for generating high-quality animation driven by audio and a reference portrait image.

The methodology is divided into two stages. First, 3D intermediate facial representations are extracted from the audio and projected into a sequence of 2D facial landmarks.
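The projection from 3D intermediate representations to 2D landmarks is, at its core, a pinhole-camera projection. The sketch below illustrates that step in isolation; the function name, the landmark shapes, and the camera intrinsics are illustrative assumptions, not AniPortrait's actual API.

```python
import numpy as np

def project_landmarks(points_3d: np.ndarray, intrinsics: np.ndarray) -> np.ndarray:
    """Project (N, 3) camera-space 3D landmarks to (N, 2) pixel coordinates.

    Illustrative sketch only -- AniPortrait's real pipeline derives these
    points from its audio-to-3D stage; here they are given directly.
    """
    # Homogeneous pinhole projection: p = K @ X, then divide by depth.
    projected = points_3d @ intrinsics.T            # (N, 3)
    return projected[:, :2] / projected[:, 2:3]     # perspective divide

# Toy example: a 3x3 intrinsic matrix (focal length 500 px,
# principal point at 320, 240) and one landmark 2 m in front of the camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 2.0]])
print(project_landmarks(pts, K))  # a point on the optical axis maps to (320, 240)
```

Running this per audio frame over the full 3D landmark set yields the 2D landmark sequence that conditions the second stage.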

Second, a robust diffusion model, coupled with a motion module, converts the landmark sequence into a photorealistic and temporally consistent portrait animation.

Experimental results demonstrate the superiority of AniPortrait in terms of facial naturalness, pose diversity, and visual quality, thereby offering an enhanced perceptual experience.

Moreover, the method exhibits considerable flexibility and controllability, so it can be applied to tasks such as facial motion editing and face reenactment.

Code and model weights are released at [github.com/scutzzj/AniPortrait](https://github.com/scutzzj/AniPortrait).

{% embed url="https://github.com/scutzzj/AniPortrait" %}

{% embed url="https://huggingface.co/ZJYang/AniPortrait/tree/main" %}
{% endembed %}
