VideoPoet (Google)

Google's VideoPoet AI has 6 Generative Abilities

In December 2023, Google unveiled VideoPoet, its large language model designed for zero-shot video generation, with six cutting-edge capabilities that push the boundaries of what's possible in video generation.

Google's approach with VideoPoet is to seamlessly integrate many video generation capabilities within a single language model, rather than relying on separately trained components, resulting in the following six generative abilities.

1. Text to Video.

This allows the creation of videos of varying lengths and styles from text prompts. By drawing inspiration from public domain artworks, VideoPoet aims to keep its content generation responsible and ethical.

2. Image to Video.

VideoPoet can animate static images to produce motion. This feature opens up new possibilities for storytelling and content creation, allowing artists to transform still images into dynamic video sequences.

3. Video Stylization.

VideoPoet can overlay text-guided styles onto videos, adding a layer of artistic flair to the generated content. The stylization capability works by first predicting optical flow and depth information for the input video, then applying the text-guided style on top of that underlying structure.

4. Video Inpainting and Outpainting.

VideoPoet also excels at video inpainting and outpainting, enabling users to edit videos by adding or removing elements.

This feature is particularly useful for content creators looking to enhance or modify their videos with high precision.

5. Video to Audio.

Uniquely, VideoPoet is capable of generating audio alongside video. The ability to produce synchronized audio-visual content from a single model is a significant advancement, offering a more holistic approach to video generation.

6. Long Video and Editing.

VideoPoet is not limited to short clips. It can generate longer videos by extending a sequence chunk by chunk, conditioning each new segment on the end of what has already been generated so that objects stay consistent over time (a rough sketch of this idea follows below). Additionally, the model allows for interactive editing of video clips, giving users extensive control over the content.
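To make that extension idea concrete, here is a minimal, self-contained Python sketch of a chunk-wise autoregressive loop. The `generate_next_chunk` stub, token counts, and vocabulary size are illustrative assumptions, not Google's actual model or API; a real system would sample the next tokens from a trained transformer instead of producing random ones.

```python
import numpy as np

CHUNK_TOKENS = 64     # tokens per generated video chunk (illustrative)
CONTEXT_TOKENS = 64   # how much of the tail we condition on (illustrative)
VOCAB_SIZE = 8192     # size of the discrete video-token vocabulary (illustrative)

rng = np.random.default_rng(0)

def generate_next_chunk(context: np.ndarray) -> np.ndarray:
    """Stand-in for the autoregressive model: given the tail of the token
    sequence, return the next chunk of discrete video tokens.
    A real model would sample these from a transformer; here we fake it."""
    return rng.integers(0, VOCAB_SIZE, size=CHUNK_TOKENS)

def extend_video(initial_tokens: np.ndarray, extra_chunks: int) -> np.ndarray:
    """Grow a token sequence chunk by chunk, always conditioning on the most
    recent CONTEXT_TOKENS so new content stays consistent with what came before."""
    tokens = initial_tokens.copy()
    for _ in range(extra_chunks):
        context = tokens[-CONTEXT_TOKENS:]
        tokens = np.concatenate([tokens, generate_next_chunk(context)])
    return tokens

seed = rng.integers(0, VOCAB_SIZE, size=CHUNK_TOKENS)    # tokens for the first clip
long_video_tokens = extend_video(seed, extra_chunks=10)  # keep extending the clip
print(long_video_tokens.shape)
```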

Understanding the trend toward short-form content, VideoPoet is optimized for portrait orientation, making it ideal for social media and mobile viewing. To demonstrate its capabilities, Google produced a short film with VideoPoet based on a script about a traveling raccoon. The film showcases the model's ability to stitch a narrative together from many short clips, but there's a twist in how VideoPoet works.

The success of VideoPoet lies in its use of the large language model paradigm for training, which lets it reuse the scalable efficiency improvements of existing LLM training infrastructure, despite the challenge that LLMs operate on discrete tokens rather than raw pixels and waveforms. VideoPoet therefore employs video and audio tokenizers to encode clips as sequences of discrete tokens, which can later be decoded back into their original forms (a toy illustration of this tokenize-and-detokenize loop is sketched below).

In evaluations, VideoPoet has shown remarkable results, often being chosen as the preferred option over other models, highlighting its quality and versatility. Currently, diffusion-based models are the dominant force in video generation, but large language models have gained recognition for their exceptional learning capabilities across modalities, including language, code, and audio.
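The sketch below shows, in the simplest possible terms, how continuous data can be mapped to discrete token IDs via a codebook and then approximately reconstructed. The codebook size, patch dimensions, and nearest-neighbour lookup are simplifications for illustration; VideoPoet's actual tokenizers are learned neural codecs (the paper describes MAGVIT-v2 for video and SoundStream for audio), not lookups over raw pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# A codebook maps each discrete token ID to a small vector.
# Here it is random; in a real tokenizer it is learned jointly with an encoder.
CODEBOOK_SIZE = 256   # number of distinct tokens (illustrative)
PATCH_DIM = 12        # flattened size of one spatio-temporal patch (illustrative)
codebook = rng.normal(size=(CODEBOOK_SIZE, PATCH_DIM))

def tokenize(patches: np.ndarray) -> np.ndarray:
    """Encode continuous patches as discrete token IDs by nearest-neighbour
    lookup against the codebook (the essence of vector quantization)."""
    dists = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)          # one integer token per patch

def detokenize(token_ids: np.ndarray) -> np.ndarray:
    """Decode token IDs back into (approximate) continuous patches."""
    return codebook[token_ids]

# Pretend this is a tiny video clip chopped into 50 flattened patches.
clip_patches = rng.normal(size=(50, PATCH_DIM))
tokens = tokenize(clip_patches)          # a sequence an LLM can model
reconstruction = detokenize(tokens)      # back toward the pixel domain
print(tokens[:10], reconstruction.shape)
```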

But Google's approach sets itself apart by integrating a wide range of video generation capabilities within a single LLM, moving away from the traditional reliance on multiple task-specific components. This holistic integration not only simplifies the video creation process, but also leverages the full potential of LLMs, offering a more efficient and versatile solution for AI-driven video generation.

Meanwhile, Runway ML just unveiled two new features for its video generator. The first, called Text to Speech, integrates synthetic voices into its video editor. The tool offers a diverse range of voice options spanning characteristics such as age and gender, including young, mature, female, and male voices. Notably, this feature is accessible across all Runway plans, ensuring wide availability.

The second feature, the Ratio function, is aimed at user convenience and efficiency: with a single click, users can transform their videos into different formats, such as the square 1:1 ratio or the widescreen 16:9 ratio. This functionality greatly simplifies the process of tailoring content for various platforms, addressing a common challenge faced by content creators today.
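Runway's one-click implementation is proprietary, but the underlying re-framing operation can be approximated with standard tools. The sketch below drives ffmpeg (assumed to be installed) from Python; the crop and pad filter choices, output resolution, and file names are illustrative assumptions, not Runway's method.

```python
import subprocess

def to_square(src: str, dst: str) -> None:
    """Centre-crop a video to a 1:1 aspect ratio, like a 'square' export."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", "crop='min(iw,ih)':'min(iw,ih)'",
        dst,
    ], check=True)

def to_widescreen(src: str, dst: str) -> None:
    """Fit a video into a 16:9 frame (1920x1080), padding instead of distorting."""
    subprocess.run([
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=1920:1080:force_original_aspect_ratio=decrease,"
               "pad=1920:1080:(ow-iw)/2:(oh-ih)/2",
        dst,
    ], check=True)

# Example usage (file names are placeholders):
# to_square("input.mp4", "output_1x1.mp4")
# to_widescreen("input.mp4", "output_16x9.mp4")
```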

In the near future, the company also aims to develop what it terms world models: sophisticated AI systems designed to understand and simulate the visual world. These models are expected to significantly advance AI capabilities beyond current limitations. Like something out of a science fiction book, a world model is supposed to construct an internal representation of an environment and use it to simulate future events within that setting.

The ultimate objective of a general world model is to accurately map and simulate real-world scenarios and interactions, a feat that would mark a monumental achievement in AI. But Runway acknowledges that current video models such as Gen-2 represent only the beginning stages of world models. These early versions have begun to grasp the basic concepts of physics and motion necessary for video generation.

However, they are still constrained in their understanding of complex camera dynamics and object movements. The company's ongoing challenges include developing models capable of producing consistent environmental maps and realistic human behavior simulations.

This endeavor aligns with the views of leading AI experts, who assert that AI must first develop a fundamental world model and a basic understanding of the world to achieve human-like intelligence. Importantly, the research being done towards this end is grounded in multimodal training, integrating text, audio, image, and video, an approach that is increasingly becoming the standard in AI development because it mirrors the complex, multifaceted nature of human learning and perception.

Overall, Runway's introduction of new video features and its pioneering research into general world models represent a significant leap, pushing the boundaries of current technology toward systems that can understand and simulate the visual world. Runway ML is not just advancing AI technology; it is also paving the way for a future where AI can interact with and understand the world in a manner akin to human cognition.

As we witness these developments, it's clear that we are entering a new era of AI innovation. This era will crystallize around AI tools like Runway's and VideoPoet, meaning the technical barriers to producing high-quality, engaging content will be significantly lowered. There will also be a period, though only a short one, in which video generated by AI and video generated by humans are indistinguishable from one another, until AI quickly becomes capable of creating vastly superior content: video and audio customized to each viewer, using generative AI to create deeply immersive virtual reality experiences that hinge on the user's emotions, movements, viewing history, preferences, purchases, and more. By 2030, media as we know it will have transformed into something that looks more like an extension of the viewer, rather than the one-size-fits-all presentation we are currently used to.

These media experiences will likely become more realistic than our current reality as we quickly enter into the simulation.
