Yandex has presented the beta version of its new YandexART (Vi) neural network, which can now create short videos, up to five seconds long, with moving objects. Unlike previous versions, where motion was limited to the camera or background, the new model can recreate realistic and smooth object movement, such as a dog running, a leaf falling, or fireworks going off.
YandexART (Vi) lets users create unique animated intros and video content, which makes it useful for bloggers, animators, and other content creators. The model is already available in the Shedevrum application.
Earlier versions of YandexART generated animations in which only the background or camera moved, while objects changed noticeably from frame to frame. The new version overcomes these limitations, producing more coherent and realistic videos by learning from examples of real motion, such as a passing car or a moving cat.
The process of creating a video begins with a text description from the user, for example, "A rhinoceros dancing hip-hop in a twilight forest." Starting from this description and an initial image, the neural network gradually forms a sequence of frames, step by step turning digital noise into a smooth video.
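Yandex does not disclose implementation details here, but the "noise turned into frames" description is characteristic of diffusion-style generation. Below is a minimal, purely illustrative Python sketch of that idea: a stack of noisy frames is refined over many steps, conditioned on a prompt embedding. The denoiser, the embedding, and all names and parameters are invented for illustration and do not correspond to any real YandexART API.

```python
import numpy as np

# Toy stand-in for a text- and image-conditioned denoiser. In a real
# system this would be a large neural network; here it merely nudges the
# noisy frames toward a value derived from the prompt embedding, so the
# overall loop structure is visible without any model weights.
def denoise_step(frames: np.ndarray, prompt_embedding: np.ndarray,
                 step: int, total_steps: int) -> np.ndarray:
    target = np.tanh(prompt_embedding.mean())   # fake "content" signal
    alpha = (step + 1) / total_steps            # crude denoising schedule
    return (1.0 - alpha) * frames + alpha * target

def generate_clip(prompt_embedding: np.ndarray, num_frames: int = 25,
                  height: int = 4, width: int = 4, channels: int = 3,
                  steps: int = 30, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Start from pure noise: one (H, W, C) tensor per frame.
    frames = rng.standard_normal((num_frames, height, width, channels))
    # Refine the whole frame stack together so the frames stay coherent.
    for step in range(steps):
        frames = denoise_step(frames, prompt_embedding, step, steps)
    return frames

# Hypothetical embedding standing in for the encoded text description.
prompt = np.array([0.2, -0.7, 1.1])
clip = generate_clip(prompt)
print(clip.shape)  # (25, 4, 4, 3): a tiny "video" of 25 frames
```

The key point the sketch illustrates is that all frames are denoised jointly under one conditioning signal, which is what keeps objects consistent across the clip rather than changing shape from frame to frame.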