RunwayML, one of the earliest startups to train AI models for video generation, today announced that its latest frontier model, Gen-3 Alpha, is now generally available.
First announced a couple of weeks ago, Gen-3 Alpha allows users to produce hyper-realistic AI videos from text, image or video prompts. With widespread access rolling out today, anyone signed up on the RunwayML platform can use the model's high-fidelity, controllable generations to power a range of creative use cases, including advertising, much like what OpenAI has teased with Sora.
However, there's a caveat: unlike the Gen-1 and Gen-2 models, Gen-3 Alpha is not free. Users will have to upgrade to a paid plan from the company, with prices starting at $12 per editor per month, billed yearly.
What to expect from Gen-3 Alpha
After launching its Gen-1 and Gen-2 models just a few months apart last year, RunwayML went quiet on the model front and focused on feature updates for its platform. During this window, several rivals showcased their offerings, including Stability AI, OpenAI, Pika and, most recently, Luma Labs.
As the AI video wars picked up pace, the startup decided to reemerge last month with Gen-3 Alpha. The model, trained on videos and images annotated with highly descriptive captions, allows users to produce hyper-realistic video clips featuring imaginative transitions, precise key-framing of elements and expressive human characters displaying a wide range of actions, gestures and emotions.
The samples shared by Runway show how big an upgrade Gen-3 Alpha is over Gen-1 and Gen-2 in terms of speed, fidelity, consistency and motion. The company said it partnered with a “cross-disciplinary team of research scientists, engineers and artists” to develop these capabilities — but did not share much about the exact source of training data.
Now, with Gen-3 Alpha widely available, users can upgrade to Runway's paid plan and start using the model to produce videos for different creative use cases.
In the initial phase, Runway says the model will power its text-to-video mode, allowing users to bring their ideas to life with natural language prompts. In the coming days, the company is expected to extend it to other modes, including image-to-video and video-to-video. The model will also work with control features on the Runway platform, such as Motion Brush, Advanced Camera Controls and Director Mode.
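For readers curious what a prompt-driven workflow could look like programmatically, here is a minimal, purely hypothetical sketch. Runway has not published an API for Gen-3 Alpha as of this writing, so the endpoint URL, field names and response shape below are illustrative assumptions rather than the company's actual interface; generation today happens through the Runway web app.

```python
import requests

# Hypothetical endpoint and fields: Runway has not published a Gen-3 Alpha
# API, so this only illustrates what a text-to-video request might look like.
API_URL = "https://api.example.com/v1/text_to_video"  # placeholder URL

payload = {
    "model": "gen-3-alpha",  # assumed model identifier
    "prompt": "A slow dolly shot of a lighthouse at dawn, waves crashing below",
    "duration_seconds": 10,  # the article notes clips run up to 10 seconds
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
    timeout=120,
)
response.raise_for_status()
print(response.json())  # e.g. a job ID or a URL to the rendered clip
```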
Each video generated with the model will run up to 10 seconds, with generation speed varying according to the length of the video. That maximum is longer than what most AI video models offer, but shorter than the one-minute generations OpenAI has promised with Sora (which has yet to be released).
It will be interesting to see how the creative community puts Gen-3 Alpha to use. Emad Mostaque, the former CEO of Stability AI, already took it for a spin to see how its generations fare against those from Sora.
That said, this is just the beginning: Gen-3 Alpha is expected to improve in the coming months, with a free version of the model also making its way to users at some point.
Runway describes it as the “first of an upcoming series of models” trained on a new infrastructure built for large-scale multimodal training and “a step towards building General World Models” that can “represent and simulate a wide range of situations and interactions, like those encountered in the real world.”