Adobe is making the jump into generative AI video. The company’s Firefly video model launched on October 14th, 2024, with a handful of new tools, including some built right into Premiere Pro, that let creatives extend footage and generate video from still images and text prompts.
The first of these tools, Generative Extend, has launched in beta for Premiere Pro. It can extend the beginning or end of footage that’s slightly too short, or make adjustments mid-shot, such as correcting shifting eye-lines or smoothing over unexpected moments.
Note that clips can be extended by only up to two seconds, so the tool is practical only for small tweaks — a way to fix minor issues without retaking the footage. Extended clips can be generated at 720p or 1080p at 24 fps. The feature also works on audio to help smooth out edits: it can extend sound effects and ambient ‘room tone’ by up to ten seconds.
Two more video-generation tools, Text-to-Video and Image-to-Video, are available on the web. They were announced in September and are now running as a limited public beta in the Firefly web app.
Text-to-Video works like other video generators such as OpenAI’s Sora and Runway: users simply enter a text description of what they want to generate. It can reproduce a variety of styles, including regular ‘real’ film, 3D animation, and stop motion, and the generated clips can be refined with a selection of ‘camera controls’ that simulate things like camera angle, shooting distance, and motion.
Image-to-Video goes further by letting users add a reference image alongside the text prompt for additional control over the results. Adobe suggests this can be used to make b-roll from photographs and illustrations, or to help visualize reshoots by uploading a still from an existing video. It won’t replace reshoots outright, but it can fix small errors such as a wobbling cable or a shifting background.
You won’t be able to make entire movies with this tech any time soon, either. Text-to-Video and Image-to-Video clips currently max out at five seconds, and the quality peaks at 720p and 24 frames per second. By comparison, OpenAI says Sora can generate videos up to a minute long while maintaining visual quality and adhering to the user’s prompt — though Sora still isn’t available to the public, despite being announced several months before Adobe’s tools.
Each of the tools above takes about 90 seconds to generate a result, though Adobe is working on a ‘turbo mode’ to reduce that. Adobe also promises that the tools are powered by AI video models that are ‘commercially safe’, trained only on content the creative software giant had permission to use.
A major advantage of creating or editing video with Adobe’s tools is that the output can be embedded with Content Credentials, which help disclose AI usage and ownership rights when the video is published online. The AI video launch was announced on October 14th, 2024, at Adobe’s MAX conference, where the company also introduced several other AI-powered features across its creative apps.