Technology used by Runway

The core generative video technology used by RunwayML is based on GANs (Generative Adversarial Networks):

  • They use a GAN architecture called VGAN (Video GAN) for generating short video clips directly from noise vectors.

  • VGAN uses spatio-temporal (3D) convolutional networks, so an entire sequence of frames is generated jointly rather than one frame at a time.

  • For video editing, they use a GAN model called Recycle-GAN, which performs unpaired video-to-video translation: it synthesizes new footage by retargeting the content of one input clip into the style of another.

  • Recycle-GAN combines adversarial and cycle-consistency losses with temporal predictors (the "recycle" loss), which keeps the translated frames coherent over time.

  • Users can control aspects such as motion, content style, and camera angle for generation and editing through intuitive interfaces, without needing any coding experience.
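The spatio-temporal convolution idea behind VGAN can be sketched in PyTorch. This is an illustrative toy generator only, not Runway's or the VGAN paper's actual implementation; the layer sizes, noise dimension, and output resolution are arbitrary choices for the example. The key point is that `ConvTranspose3d` upsamples along a time axis and two spatial axes at once, so the network emits a whole clip per noise vector:

```python
import torch
import torch.nn as nn

class VideoGenerator(nn.Module):
    """Toy VGAN-style generator: noise vector -> short video clip.

    Illustrative sketch only; architecture details are assumptions,
    not Runway's production model.
    """

    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # (z_dim, 1, 1, 1) -> (256, 2, 4, 4); first axis after
            # channels is time, the last two are height and width
            nn.ConvTranspose3d(z_dim, 256, kernel_size=(2, 4, 4)),
            nn.BatchNorm3d(256),
            nn.ReLU(inplace=True),
            # each block doubles the temporal and spatial resolution
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),
            nn.BatchNorm3d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1),
            nn.Tanh(),  # RGB frames scaled to [-1, 1]
        )

    def forward(self, z):
        # z: (batch, z_dim) -> video: (batch, 3, frames, height, width)
        return self.net(z.view(z.size(0), -1, 1, 1, 1))

gen = VideoGenerator()
video = gen(torch.randn(2, 100))
print(video.shape)  # torch.Size([2, 3, 16, 32, 32])
```

In a full GAN setup this generator would be trained against a 3D-convolutional discriminator that scores whole clips as real or fake; the same jointly-over-time structure is what lets such models produce temporally coherent motion.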

In summary, the primary generative technology behind RunwayML consists of video-focused GAN models such as VGAN and Recycle-GAN, which enable creative video generation and editing through an accessible web platform aimed at artists and designers.
