Assistive Video Review
I just tried the new Assistive AI video tool, and its realism is incredible
Artificial intelligence startup Assistive has launched a new generative video platform called Assistive Video that can create four-second clips from text or images.
This is the latest in a growing line-up of AI video tools, joining the likes of Runway, Pika Labs, Leonardo and StabilityAI’s Stable Video Diffusion. The field is improving rapidly, with different underlying models showing strengths in different areas.
Assistive says it is particularly focused on photorealism, and that its model is improving all the time. The current version is an early alpha release, but it performed better than I expected, particularly on visual appeal.
The company says the goal is to help users to create the most natural and realistic-looking clips possible from both text and image prompts.
How well does Assistive Video work?
To find out how good it is, I decided to try it out. Assistive Video works much like the other AI video tools: you enter a simple prompt, adjust some basic motion options and other settings, then click Imagine and wait to see what it creates.
Testing any artificial intelligence tool involves a degree of luck, keeping prompts consistent across different models, and a bit of trial and error. As Assistive prides itself on photorealism, I picked subjects that would put that claim to the test.
The rocket launch test
(Image credit: Assistive Video)
The first prompt was something the models all tend to struggle with without additional settings: a rocket launch. I opted for a text-to-video prompt and left all other settings on default.
I asked Assistive Video to: “Create a photorealistic rocket launch from a coastal spaceport at dawn. Show smoke billowing under the rocket as it lifts off the ground.”
With this first prompt, I gave it a detailed, descriptive outline of both the visuals and the expected motion within the video. It is a good way to test the model's understanding of complex ideas, including something AI video models often struggle with: animating the lift off the ground.
It created a stunning video. It wasn't exactly what I had in my head, as I expected the rocket to sit on the ground and slowly lift off, but it got the billowing smoke right, and the rocket can be seen rising and moving slowly off screen.
Walking down the street
(Image credit: Assistive Video)
For the second prompt, I wanted to see how well it handled someone strolling down the street. This is a type of motion most of the AI video tools struggle with, often sending the person walking backward or causing cars to move in reverse.
I opted to keep it simple. I gave it the text prompt: “A woman walking down the street, facing away from the camera, going off into the distance on Main Street in a small town.”
The resulting output was reminiscent of 80s home camcorder footage. Not only did it manage the woman walking better than I expected, it also added an element of shaky camera movement, making it feel even more realistic.
While the motion was impressive, there was a degree of unreality in the visuals: blockiness on the buildings verged on early 3D animation, but this could be forgiven given the impressive motion.
The water droplet test
(Image credit: Assistive Video)
Water is a great subject for testing motion as well as realism: ripples can look stunning, and the splash-down of a droplet can create varied and complex visuals.
The prompt was: “A photorealistic close-up of a water droplet falling into a still pond, creating ripples.” I used average settings for both motion level and adherence to the text.
The resulting video is stunning and hypnotic. It shows water slowly rippling as a stream of droplets falls to the surface. It captures shadow and light perfectly.
Hummingbird in flight
(Image credit: Leonardo/Ryan Morrison)
Next was the first image-to-video test. I used Leonardo to generate a bright, colorful image of a hummingbird with flowers. The hope was that Assistive Video would animate the wings, or add motion around the bird as it hovers near the flower.
There is no text prompt; the model takes all of its motion cues from the image. Some tools offer a combined input, pairing text with an image to describe how the image prompt should be interpreted. That may come in future releases, but this is an alpha version of the tool.
The final video captures the beauty of both the hummingbird and the flowers in the original image. It has camera motion and a degree of depth.
Even the initial motion of the hummingbird is well done, but in the second half it suffers the same issue as many AI video tools: the bird merges in on itself rather than gently moving forward into the flower.
An aurora on full display
(Image credit: Assistive Video)
For the last two images I used Leonardo's prompt generation tool. You give it a rough indication and it writes the prompt for you. In this case, it generated an image of a green aurora.
The final image prompt was: “Imagine a serene night sky, dotted with twinkling stars and a faint hint of green aurora, as if nature itself is putting on a magical light show just for you.”
The image showed a beautiful frozen Arctic lake with a stunning view of the Milky Way galaxy and a light green glow surrounding the hills beyond the lake.
This result may be my favorite of the test. It seemed to enhance the stunning visuals of the original image, adding a degree of realism and rendering the aurora as a sweeping cloud-and-light show in the sky.
Final thoughts on Assistive Video
Overall, Assistive Video is a useful addition to the generative video ranks. It suffers from many of the same issues around motion as the others, particularly where people are involved, but it does bring an interesting degree of photorealism to its output.