Create an Animated Video from a Single Image

Stable Diffusion (https://clipdrop.co/stable-diffusion) and Runway ML Gen-2

An AI video generator for turning images into animations

Tool-Kit:

  1. Runway ML Gen-2

https://www.youtube.com/watch?v=4ASNgnw1fss

1. First, generate some images to use as input.

2. Next, convert those images into videos.

3. Finally, extend the length of the resulting videos.

To create the images, use Stable Diffusion's newest model, version 1.0:

(https://clipdrop.co/stable-diffusion)

Let's say you want to generate cartoon animations. Enter the following prompt:

“a surprise cartoon image in a Disney style”

  • Set the Aspect Ratio to 16:9

  • Set the Style to ‘photographic’

If you don't use the photographic style, you will likely end up with flat, comic-style results instead of a character that animates well.
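If you would rather script this step than use the Clipdrop web UI, a rough local equivalent uses the open-source diffusers library with the SDXL 1.0 base model. The model name, resolution, and the way the ‘photographic’ style is folded into the prompt are assumptions, not the exact settings Clipdrop uses behind the scenes:

```python
# Sketch: generate the input character locally with SDXL 1.0 via diffusers.
# Assumes a CUDA GPU and that torch, diffusers, and transformers are installed.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # SDXL 1.0 base weights (assumed equivalent)
    torch_dtype=torch.float16,
).to("cuda")

# Approximate the web UI settings: 16:9 aspect ratio, 'photographic' style keyword.
prompt = "a surprise cartoon image in a Disney style, photographic style"
image = pipe(prompt, width=1024, height=576).images[0]

image.save("input_character.png")
```

Run the prompt a few times and keep the generation you like best as your input character.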

Next, run the prompt and select one of the generated images as your input character.

Once you have your input character, it's time to turn that image into a video using Runway ML Gen-2.

Runway ML Gen-2 has an option to upload an image as input. Upload the image you just generated. For the best results, leave the prompt field blank and click ‘Generate’.

It should take less than a minute to generate the video. When it's ready, play it and check the results. The clip will be only four seconds long, but you can extend it by following the next steps.

To extend the video, go to the very last frame of your clip and take a screenshot of it. Upload that screenshot back into Runway ML Gen-2 and repeat the process to generate the next clip. (Again, do not use any prompt; run the same process with the prompt field left blank.) Instead of screenshotting, you can also grab the exact last frame with a short script, as shown below.
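Here is a minimal sketch for pulling the last frame out of a downloaded clip. It assumes opencv-python is installed and that the clip is saved locally as gen2_clip.mp4 (the file names are placeholders):

```python
# Sketch: extract the last frame of a Gen-2 clip instead of screenshotting it.
import cv2

cap = cv2.VideoCapture("gen2_clip.mp4")
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))

# Seek to the final frame and read it.
cap.set(cv2.CAP_PROP_POS_FRAMES, frame_count - 1)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("last_frame.png", frame)  # upload this image back into Gen-2
else:
    raise RuntimeError("Could not read the last frame of the clip")
```

Uploading the extracted frame also avoids the extra compression and cropping that a screenshot can introduce.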

Keep repeating the process with each video's last frame. This way you will end up with a batch of 4-second clips that you can combine in any video editor into one long animated video (or stitch together with the sketch after this paragraph).
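If you prefer to script the stitching instead of opening a video editor, a minimal sketch with moviepy (1.x API) could look like this; the clip file names are placeholders for your exported Gen-2 clips, listed in order:

```python
# Sketch: concatenate the 4-second Gen-2 clips into one long animation.
from moviepy.editor import VideoFileClip, concatenate_videoclips

clip_paths = ["clip_01.mp4", "clip_02.mp4", "clip_03.mp4"]  # your clips, in order
clips = [VideoFileClip(path) for path in clip_paths]

# 'compose' handles clips whose sizes differ slightly between generations.
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("animated_video.mp4", codec="libx264", audio=False)

for clip in clips:
    clip.close()
```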

One drawback: each screenshot-and-regenerate round degrades the quality slightly, so the later clips look noticeably softer than the first. Otherwise, the workflow works really well.
