image-to-video
POST
/v2alpha/generation/image-to-video

Generate a short video based on an initial image with Stable Video Diffusion, a latent video diffusion model.
How to generate a video:
Video generations are asynchronous: after starting a generation, use the id returned in the response to poll /v2alpha/generation/image-to-video/result/{id} for the result.
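The polling half of that flow can be sketched as follows. This is a minimal stdlib-only sketch, with several assumptions not stated above: the host `https://api.stability.ai`, bearer-token authentication, an HTTP 202 status meaning "still in progress", and the finished response body being the raw video bytes.

```python
import time
import urllib.request

API_HOST = "https://api.stability.ai"  # assumed host


def result_url(generation_id: str) -> str:
    """Build the polling URL for a generation id."""
    return f"{API_HOST}/v2alpha/generation/image-to-video/result/{generation_id}"


def poll_result(api_key: str, generation_id: str, interval: float = 10.0) -> bytes:
    """Poll the result endpoint until the video is ready.

    Assumes an in-progress generation answers with HTTP 202 and a finished
    one returns the video bytes directly; adjust to the official reference.
    """
    headers = {"Authorization": f"Bearer {api_key}", "Accept": "video/*"}
    while True:
        req = urllib.request.Request(result_url(generation_id), headers=headers)
        with urllib.request.urlopen(req) as resp:
            if resp.status != 202:  # anything other than "in progress"
                return resp.read()
        time.sleep(interval)  # wait before polling again
```

Polling every ~10 seconds keeps request volume low while the generation runs; the video is typically saved to disk once `poll_result` returns.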
Request
image
The source image used in the video generation process. Ensure the source image is in a supported format and at supported dimensions.
Supported Formats:
- jpeg
- png
Supported Dimensions:
- 1024x576
- 576x1024
- 768x768
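The format and dimension constraints above can be checked client-side before uploading. A minimal sketch; the function name is illustrative, not part of the API:

```python
# Constraints as documented for the image-to-video endpoint.
SUPPORTED_FORMATS = {"jpeg", "png"}
SUPPORTED_DIMENSIONS = {(1024, 576), (576, 1024), (768, 768)}


def is_valid_source(fmt: str, width: int, height: int) -> bool:
    """Return True if the source image satisfies the documented constraints."""
    return fmt.lower() in SUPPORTED_FORMATS and (width, height) in SUPPORTED_DIMENSIONS
```

Validating locally avoids a round trip that would only fail server-side with a 4xx error.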
seed
A specific value used to guide the 'randomness' of the generation. (Omit this parameter or pass 0 to use a random seed.)
cfg_scale
How strongly the video sticks to the original image. Use lower values to give the model more freedom to make changes, and higher values to correct motion distortions.
motion_bucket_id
Lower values generally result in less motion in the output video, while higher values generally result in more motion. This parameter corresponds to the motion_bucket_id parameter from the paper.
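Putting the parameters together, the non-file fields of the multipart/form-data request body might be assembled like this. A sketch under assumptions: the default values shown are illustrative, not the API's documented defaults, and the exact encoding should be checked against the official reference.

```python
def build_image_to_video_fields(seed: int = 0,
                                cfg_scale: float = 1.8,
                                motion_bucket_id: int = 127) -> dict:
    """Assemble the form fields for POST /v2alpha/generation/image-to-video.

    Defaults here are example values, not documented API defaults.
    """
    return {
        "seed": str(seed),                         # 0 or omitted = random seed
        "cfg_scale": str(cfg_scale),               # adherence to the source image
        "motion_bucket_id": str(motion_bucket_id), # amount of motion in the output
    }
```

These fields would be sent alongside the `image` file in the same multipart request, e.g. with a client like requests: `requests.post(url, headers=..., files={"image": fh}, data=build_image_to_video_fields())`.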