AI is evolving with each passing day, and many operations that could only be imagined before can now be done with it. What can the latest Stable Diffusion, ControlNet, and EBsynth actually do? How do you replace the characters and scenes in a video with one click, or generate a fantastical video from a text description? We have collected some of the latest and strongest tools below. A demo recording follows.
Due to platform restrictions, a direct access link cannot be added for each tool in the detailed introductions below; all of them are included in the collection mentioned above.
Wonder Studio
Replaces the people in a video with arbitrary 3D models, which is especially useful for movie special effects. You can use the officially provided models or upload your own. The operation is simple, the results are good, and they are near-perfect when the range of motion is small.
Runway GEN2
The results it can achieve are still very unstable, but it generates video from text, completely from scratch, which arguably makes it the most cutting-edge AI video technology. It currently offers several modes:
(1) Generate video directly from text.
(2) Provide a photo plus a text description; as shown in the picture below, this can also produce a good video.
(3) Generate video from a photo alone.
(4) Photo stylization: the technique itself is already mature, but it can now produce some dreamlike effects.
(5) Storyboards: given a template, let the AI fill in the imagination for you.
(6) Pure rendering: render a 3D model into a realistic video.
Stable Diffusion+ControlNet+EBsynth
An AI video method with a very high ceiling but extremely complex operation; there are no step-by-step Chinese tutorials yet. The essence of video-to-video generation is frame-by-frame replacement.
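The frame-by-frame idea can be sketched in a few lines. This is a minimal illustration only: `stylize_frame` here is a trivial placeholder standing in for a real Stable Diffusion img2img call (with ControlNet keeping the pose and edges of the source frame), and the frames are tiny NumPy arrays rather than decoded video.

```python
import numpy as np

def stylize_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for an img2img call: a real pipeline would send each
    frame through Stable Diffusion with ControlNet guidance so the
    original pose/edges are preserved. Here we just invert colors."""
    return 255 - frame

def restyle_video(frames: list) -> list:
    """The core of video-to-video: replace every frame independently,
    then reassemble. Tools like EbSynth instead propagate a few styled
    keyframes to neighboring frames, which reduces flicker."""
    return [stylize_frame(f) for f in frames]

# three dummy 4x4 grayscale "frames"
video = [np.full((4, 4), v, dtype=np.uint8) for v in (0, 128, 255)]
styled = restyle_video(video)
print([int(f[0, 0]) for f in styled])  # -> [255, 127, 0]
```

The flicker between independently stylized frames is exactly why EbSynth and temporal-consistency tricks exist.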
Using AI-generated motion as input: for now this exists only as a paper, and the source code has not yet been released.
Blender+Stable Diffusion generates character action video
Video to video with Stable Diffusion (step-by-step) - Stable Diffusion Art is a very complete set of tutorials on generating video with Stable Diffusion, and it shows the video results produced by each workflow, including:
- ControlNet-M2M script
- ControlNet img2img
- Mov2mov extension
- EbSynth
- Stable WarpFusion
Stable Diffusion+Deforum
Deforum controls the camera while Stable Diffusion generates the images, producing a continuously evolving video; the effect is very cool.
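The camera control above boils down to a feedback loop: each frame is a slightly transformed copy of the previous one, which is then repainted by the diffusion model. The sketch below shows only the zoom-and-feed-back part with NumPy; the img2img repaint step, which is what makes new detail appear in real Deforum, is deliberately omitted.

```python
import numpy as np

def zoom_in(frame: np.ndarray, crop: int) -> np.ndarray:
    """Crop the center and scale it back up (nearest neighbour) --
    the kind of per-frame camera move Deforum applies."""
    h, w = frame.shape
    inner = frame[crop:h - crop, crop:w - crop]
    ys = np.arange(h) * inner.shape[0] // h
    xs = np.arange(w) * inner.shape[1] // w
    return inner[np.ix_(ys, xs)]

def deforum_like(start: np.ndarray, steps: int) -> list:
    """Feedback loop: each output frame becomes the next input.
    Real Deforum runs an img2img pass (omitted here) on every zoomed
    frame, so fresh detail keeps appearing as the camera moves."""
    frames = [start]
    for _ in range(steps):
        frames.append(zoom_in(frames[-1], crop=1))
    return frames

seed = np.arange(64, dtype=np.uint8).reshape(8, 8)
clip = deforum_like(seed, steps=3)
print(len(clip), clip[-1].shape)  # 4 frames, resolution preserved
```

Rotation and panning work the same way: any affine transform can be applied before the repaint step.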
Forward AI
Record a scene with a phone camera, and NeRF technology turns it into a 3D scene video that can be viewed and zoomed from any angle.
Github f2-nerf
Also uses NeRF technology, to generate videos of unbounded scenes.
D-ID
Upload a picture plus audio to generate a virtual-presenter broadcast video. The operation is very simple and the results are decent, though the lip sync does not quite match. Many of today's digital news presenters are produced this way.
Hey Gen
The effect is similar to D-ID, and HeyGen also has a ChatGPT plug-in: paid members can generate videos directly inside ChatGPT.
Stable Diffusion+infinite zoom
An open-source GitHub project that uses Stable Diffusion to infinitely zoom in or out of a photo.
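The zoom-out direction of this trick is easy to see in miniature: shrink the current picture, paste it at the center of a same-sized canvas, and let the model outpaint the blank border; repeat. The sketch below shows one such step with NumPy only; the outpainting call is replaced by a zero border, purely for illustration.

```python
import numpy as np

def shrink(img: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour downscale by an integer factor."""
    return img[::factor, ::factor]

def zoom_out_step(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """One step of the infinite-zoom-out loop: the current picture is
    shrunk and centered on a same-sized canvas. In the real project
    Stable Diffusion then outpaints the blank border; here the border
    simply stays zero."""
    h, w = img.shape
    canvas = np.zeros_like(img)
    small = shrink(img, factor)
    y0 = (h - small.shape[0]) // 2
    x0 = (w - small.shape[1]) // 2
    canvas[y0:y0 + small.shape[0], x0:x0 + small.shape[1]] = small
    return canvas

img = np.full((8, 8), 200, dtype=np.uint8)
out = zoom_out_step(img)
print(out[4, 4], out[0, 0])  # center keeps the old image, border is blank
```

Zooming in is the mirror image: crop the center, upscale it, and let img2img re-add detail.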
Kaiber image generation video
Upload a picture, or generate one from a prompt; it then draws a slightly changed picture based on the previous one, and the sequence forms a video.
Compiled from Notion – The all-in-one workspace for your notes, tasks, wikis, and databases.