ChainThink reports that on April 10, Alibaba officially confirmed the video generation model HappyHorse-1.0 as its proprietary product. The model was developed by the former Future Life Lab team under Taotian Group; as part of Alibaba's latest organizational restructuring, that team has been reassigned to the "AI Innovation Division" under the newly established Alibaba Token Hub (ATH) business group.
In anonymous voting on the third-party evaluation platform Artificial Analysis, HappyHorse-1.0 significantly outperformed ByteDance's Seedance 2.0 and Kuaishou's Kling 3.0 on pure video generation tasks, while performing on par with Seedance 2.0 on audio-visual generation.
According to people close to Alibaba, HappyHorse-1.0 is just one of several multimodal models the team has developed in-house, and Alibaba plans to launch another, distinct multimodal model soon. HappyHorse-1.0 is currently not open-sourced, consistent with Alibaba's broader recent shift toward closed-source releases: every model it has released since the end of March has remained proprietary.
Alibaba's aggressive push on this multimodal model follows the surprise success of ByteDance's Seedance 2.0 during the 2026 Spring Festival, which caught Alibaba off guard internally. Multimodal generation also significantly increases token consumption, which in turn affects market share in the MaaS (Model-as-a-Service) sector: according to IDC, Volcano Engine held 49.2% of that market in the first half of 2025, while Alibaba Cloud accounted for only 27%.
