OpenRouter Launches Video Generation API, Integrating Sora 2, Veo 3.1, and Seedance

ME News reports that on April 16 (UTC+8), according to monitoring by BlockBeats, the AI model aggregation platform OpenRouter officially launched its video generation API. The API initially supports text-to-video and image-to-video generation and integrates Seedance 2.0/1.5, Veo 3.1, Wan 2.7/2.6, and Sora 2 Pro, with further models to be added later.

The fragmentation of APIs in the video generation space is far more severe than in text models: each provider uses different request formats, parameter naming conventions, and billing units, and often exposes different endpoints for different capabilities within the same model family, such as text-to-video, image-to-video, and reference-character generation. OpenRouter addresses this with a unified schema that automatically routes requests to the correct endpoint based on the input parameters: if an image is provided, the request is routed to the image-to-video endpoint; if a reference character is specified, it is routed to the character-consistency endpoint. Developers no longer need to manage the underlying differences themselves.

Parameter normalization also covers common pitfalls. For example, Veo 3.1 supports clip durations of 4, 6, or 8 seconds, while Wan 2.6 supports only 5 or 10 seconds; submitting an unsupported duration results in an error. OpenRouter therefore provides a model capability lookup endpoint, `/api/v1/videos/models`, which returns the supported resolutions, durations, aspect ratios, pricing, and model-specific parameters for each model, so developers or programmatic agents can query it once before making requests instead of discovering limits by trial and error.

Because video generation typically takes minutes to complete, the API operates asynchronously: submitting a prompt returns a task ID, and the finished video is retrieved once generation completes.

OpenRouter has also open-sourced a multimodal workflow demo application showing a sequential pipeline in which an LLM writes detailed prompts, an image model creates characters, and a video model generates the scenes. This is the most direct value of unifying video generation under a single routing layer: developers can combine text, image, and video models behind one API without integrating a separate SDK for each provider. (Source: BlockBeats)
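As a rough illustration of the capability lookup step described above, the sketch below queries `/api/v1/videos/models` and filters models by a desired clip duration. Only the endpoint path comes from the announcement; the authentication scheme, base URL, and response field names (`data`, `id`, `supported_durations`) are assumptions made for the example.

```python
import requests

API_KEY = "YOUR_OPENROUTER_API_KEY"          # assumed: standard bearer-token auth
BASE_URL = "https://openrouter.ai/api/v1"    # assumed base URL

# Fetch per-model capabilities once, then validate parameters locally
# instead of discovering each model's limits through failed requests.
resp = requests.get(
    f"{BASE_URL}/videos/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()
catalog = resp.json()  # response shape below is an illustrative assumption

desired_duration = 6  # seconds; e.g. Veo 3.1 allows 4/6/8, Wan 2.6 only 5/10
candidates = [
    m.get("id")
    for m in catalog.get("data", [])
    if desired_duration in m.get("supported_durations", [])
]
print(candidates)
```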
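The unified schema and asynchronous flow could then look roughly like the following: one request shape covers both text-to-video and image-to-video, the presence of an image field decides where the job is routed, and the returned task ID is polled until the clip is ready. The submission path (`/videos/generations`), field names, and status values are hypothetical; the announcement only confirms that a task ID is returned and the video is fetched after generation completes.

```python
import time
import requests

API_KEY = "YOUR_OPENROUTER_API_KEY"
BASE_URL = "https://openrouter.ai/api/v1"    # assumed base URL
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# One request shape for both modes: include "image_url" and the platform routes
# the job to the provider's image-to-video endpoint; omit it for text-to-video.
payload = {
    "model": "google/veo-3.1",   # illustrative model slug
    "prompt": "A red fox running through fresh snow at dawn",
    "duration": 6,               # validated earlier against /videos/models
    # "image_url": "https://example.com/first-frame.png",  # image-to-video
}

# Submission is asynchronous: the API returns a task ID immediately.
task = requests.post(f"{BASE_URL}/videos/generations", json=payload,
                     headers=HEADERS, timeout=30).json()
task_id = task["id"]

# Poll until generation finishes, then read the video location.
while True:
    status = requests.get(f"{BASE_URL}/videos/generations/{task_id}",
                          headers=HEADERS, timeout=30).json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(10)

print(status.get("video_url") or status)
```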
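Finally, the open-sourced demo's sequential pipeline can be sketched as three chained calls. The chat-completions step below uses OpenRouter's standard text API; the image and video steps are left as clearly marked stubs, since the announcement does not detail their request formats. Model slugs and helper names are hypothetical.

```python
import requests

API_KEY = "YOUR_OPENROUTER_API_KEY"
BASE_URL = "https://openrouter.ai/api/v1"    # assumed base URL
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def expand_prompt(idea: str) -> str:
    """Step 1: an LLM turns a rough idea into a detailed scene prompt."""
    r = requests.post(
        f"{BASE_URL}/chat/completions",
        headers=HEADERS,
        json={
            "model": "openai/gpt-4o-mini",   # illustrative model slug
            "messages": [{
                "role": "user",
                "content": f"Write a detailed, cinematic video prompt for: {idea}",
            }],
        },
        timeout=60,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def generate_character_image(prompt: str) -> str:
    """Step 2 (stub): an image model creates the recurring character.
    Returns a URL to the generated image."""
    raise NotImplementedError("image-generation request omitted in this sketch")

def generate_scene_video(prompt: str, character_image_url: str) -> str:
    """Step 3 (stub): a video model animates the scene, taking the character
    image as a reference input so the same character appears across clips."""
    raise NotImplementedError("video-generation request omitted in this sketch")

detailed_prompt = expand_prompt("a detective cat exploring a rainy neon city")
character_url = generate_character_image(detailed_prompt)
video_url = generate_scene_video(detailed_prompt, character_url)
```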
OpenRouter, a leading player in the AI model aggregation space, announced the launch of its video generation API on April 16 (UTC+8). The API supports both text-to-video and image-to-video generation and integrates Seedance 2.0/1.5, Veo 3.1, Wan 2.7/2.6, and Sora 2 Pro. The platform uses a unified schema to streamline access, automatically routing requests based on the inputs provided. Developers can review model capabilities at `/api/v1/videos/models`. Processing is asynchronous because of the time-intensive nature of video generation. OpenRouter also released a demo app that chains LLM, image, and video models in a single workflow.