On July 24, Runway announced with much fanfare that Gen-1 and Gen-2 are now fully open to the public: anyone can register an account and try them for free. Each generated video is 4 seconds long and consumes 5 credits per second, and the free credits cover twenty-six videos. Once the free credits run out, additional credits cost $0.01 each, i.e. about $0.20 per video.
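The pricing above works out as follows (a quick sketch of the arithmetic; the 4-second clip length, 5-credits-per-second rate, and $0.01-per-credit price are from Runway's announcement, while the free-credit total is inferred from the twenty-six-video figure, not stated by Runway):

```python
# Cost of one Gen-2 video under the announced pricing.
SECONDS_PER_VIDEO = 4      # fixed clip length
CREDITS_PER_SECOND = 5     # credits consumed per second of video
USD_PER_CREDIT = 0.01      # paid tier: $0.01 per credit

credits_per_video = SECONDS_PER_VIDEO * CREDITS_PER_SECOND  # 20 credits
cost_per_video = credits_per_video * USD_PER_CREDIT         # $0.20

# 26 free videos implies roughly this many free credits (inferred):
free_credits = 26 * credits_per_video

print(credits_per_video, cost_per_video, free_credits)
```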
The author has been trying Gen-2 for a while. The videos it generates let ordinary users achieve cool special effects without professional software such as AE or Blender. From a professional standpoint, however, the output still suffers from blur, heavy graininess, poor lighting, unstable frame rates, and unnatural movement of animals and people.
What is Gen-2
Gen-2 is Runway's latest model, released in March this year. It can generate video directly from text, images, or text plus images, and supports stylization and rendering to add Hollywood-blockbuster special effects. The whole process takes only a few minutes.
According to Runway, Gen-2 uses a diffusion model: starting from an image made entirely of noise, it gradually removes that noise until the result approximates the user's text prompt. Gen-2's training data includes 240 million images, 6.4 million video clips, and hundreds of millions of learning examples.
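The denoising process described above can be illustrated with a toy sketch. This is a generic iterative-denoising loop for intuition only, not Runway's actual model: the `denoise_step` function here is a stand-in for a learned neural network that, in a real diffusion model, predicts the denoising direction conditioned on the text prompt.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, target, strength=0.1):
    """Stand-in for the learned denoiser: nudge the noisy image toward
    the (here, known) target. A real diffusion model predicts this
    direction with a neural net conditioned on the prompt."""
    return x + strength * (target - x)

target = np.full((8, 8), 0.5)       # pretend "clean image"
x = rng.standard_normal((8, 8))     # start: an image made entirely of noise

for step in range(100):             # iteratively remove noise
    x = denoise_step(x, target)

residual = np.abs(x - target).max() # shrinks with every step
print(residual)
```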
How to use Gen-2
At present, Runway offers a free trial on the web, and the companion app (RunwayML) is also available in the Apple App Store. URL: https://research.runwayml.com/gen2
Instructions:
1) Select [Try Runway for Free] at https://research.runwayml.com/gen2/ to register and log in to the Runway platform for free.
2) Select Gen-2 on the main page.
3) Enter the prompt text; Gen-2 currently limits prompts to 320 characters. For example: a futuristic utopian age with curved alien-like buildings built on a rocky alien landscape, 2300, cinematic style, cinematography, shallow depth of field, focus on subject, beautiful.
4) Preview the generated video online, or download it locally.
A sentence, a picture, and a three-second video out of thin air
Gen-2 is feature-rich, including stylization, storyboarding, masking, rendering, customization, and more.
Stylization can be understood as modifying a video's style with reference to an image. For example, given the following original video:
Given a reference image, Gen-1 can re-render the video in the style of that image. A storyboard is a term from filmmaking: before actual shooting or animation, the script's text description is converted into frame-by-frame pictures that narrate the progression of the story. Gen-1 can transform a storyboard-like mock-up video into a fully rendered scene.
A mask can be understood as modifying a specified part of a video while leaving the rest unchanged. For example, given an original video of a dog and the text command "dog with black spots on white fur," Gen-1 returns the edited video.

Rendering refers to converting a computer-generated 3D scene or special-effects mock-up into a final image. Given such an original video, Gen-1 can generate the rendered result.

In addition, Gen-2 supports custom video editing and adds text-to-video and image-to-video generation: given only a text description, an image, or text plus an image, Gen-2 can generate the corresponding video in a very short time. It is the first publicly available text-to-video model on the market.
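Conceptually, masked editing composites an edited frame over the original using a binary mask, keeping untouched regions pixel-identical. The numpy sketch below illustrates only that compositing idea; in Gen-1 the mask and the edited content are produced by the model itself, not supplied by hand like this.

```python
import numpy as np

def apply_masked_edit(original, edited, mask):
    """Keep `original` where mask == 0, take `edited` where mask == 1.
    Plain per-pixel compositing; the model's real masking is learned."""
    return np.where(mask[..., None].astype(bool), edited, original)

h, w = 4, 4
original = np.zeros((h, w, 3))           # e.g. the dog's original fur
edited = np.ones((h, w, 3))              # e.g. white fur with black spots
mask = np.zeros((h, w), dtype=np.uint8)
mask[1:3, 1:3] = 1                       # only the dog region changes

result = apply_masked_edit(original, edited, mask)
print(result[0, 0], result[1, 1])        # untouched pixel vs edited pixel
```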
For example, given the plain text "The afternoon sun shines through the windows of an attic in New York," Gen-2 directly "dreams up" the video. Given a photo plus the text "Low-angle shot: a man walking down the street, illuminated by the neon lights of the bars around him," Gen-2 returns a matching clip. And with just an image as input, Gen-2 can likewise expand it into a video.