This video was created by OpenAI using Sora, its latest model, which can generate videos from text prompts and can also take existing videos as a base to produce variations in style, cinematography, or any other variable you choose, simply by describing the change you want.
We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of high fidelity video. Our results suggest that scaling video generation models is a promising path towards building general purpose simulators of the physical world.
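The abstract mentions a transformer that operates on "spacetime patches" of video latent codes. As a rough illustration of that idea (not OpenAI's actual implementation, whose details are unpublished), the sketch below shows how a video latent tensor could be cut into spacetime patches and flattened into a token sequence; all shapes and the patch sizes here are illustrative assumptions.

```python
import numpy as np

def spacetime_patches(latent, pt, ph, pw):
    """Split a video latent tensor into flattened spacetime patches.

    latent: array of shape (T, H, W, C) -- frames, height, width, channels.
    pt, ph, pw: patch sizes along time, height, and width (assumed to
    divide the corresponding dimensions evenly; real systems may pad).
    Returns an array of shape (num_patches, pt*ph*pw*C), one token per patch.
    """
    T, H, W, C = latent.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    # Carve each axis into (blocks, patch_size) pairs...
    x = latent.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    # ...group the block axes together, then the within-patch axes...
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)
    # ...and flatten each patch into a single token vector.
    return x.reshape(-1, pt * ph * pw * C)

# Example: an 8-frame, 16x16, 4-channel latent with 2x4x4 patches
# yields a sequence of (8/2)*(16/4)*(16/4) = 64 tokens of length 128.
tokens = spacetime_patches(np.zeros((8, 16, 16, 4)), 2, 4, 4)
```

A transformer would then attend over this token sequence, which is what lets one architecture handle variable durations, resolutions, and aspect ratios: different inputs simply produce different numbers of tokens.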
Research: Video generation models as world simulators