It can not only generate metamorphic time-lapse videos, such as flowers blooming and ice melting, but also learn and apply the physical laws of the real world.
Code implementation has been released.
Detailed introduction:
First, a scheme called MagicAdapter was developed. By decoupling spatial and temporal training, it extracts more physical knowledge from metamorphic videos and enables a pre-trained T2V model to generate them.
Then, a dynamic frame extraction strategy was introduced, tailored to metamorphic time-lapse videos: such videos span a wide range of variation and cover the dramatic transformation of objects, so they contain richer physical knowledge.
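To make the idea concrete, here is a minimal sketch of what such a sampling strategy could look like. This is a hypothetical simplification for illustration, not the project's released implementation: it assumes a function `dynamic_frame_sample` that, for metamorphic videos, spreads the sampled frame indices evenly across the entire clip so the full transformation is covered, while ordinary videos get a local window.

```python
import random

def dynamic_frame_sample(num_total_frames, num_samples, metamorphic=True):
    """Pick frame indices from a video (illustrative sketch only).

    For metamorphic time-lapse videos, sample evenly across the
    whole clip so the dramatic change process is covered; for
    ordinary videos, sample a contiguous local window.
    """
    if metamorphic:
        # Evenly spaced indices spanning the entire video.
        step = (num_total_frames - 1) / (num_samples - 1)
        return [round(i * step) for i in range(num_samples)]
    # Local random window for general (non-metamorphic) videos.
    start = random.randint(0, num_total_frames - num_samples)
    return list(range(start, start + num_samples))

print(dynamic_frame_sample(100, 5))
```

For a 100-frame blooming clip, the metamorphic branch returns indices spread from the first to the last frame, whereas the general branch would only see a short slice of the change.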
A component called Magic Text-Encoder was also designed to improve understanding of prompts for metamorphic videos.
In addition, a time-lapse video-text dataset called ChronoMagic was created specifically to improve the ability to generate metamorphic videos.
If you want to learn more, you can click on the link below the video.
Thank you for watching this video. If you liked it, please like and subscribe. Thanks!
Github: https://github.com/PKU-YuanGroup/MagicTime?tab=readme-ov-file
Video: