Nvidia Launches AI Model for Autonomous Driving at NeurIPS Conference

Forklog

According to Forklog, Nvidia announced Alpamayo-R1, an open-source visual reasoning language model for autonomous driving, at the NeurIPS AI conference in San Diego. Built on the Cosmos-Reason framework, the model lets vehicles process text and images to reason about driving decisions. Nvidia noted that earlier autonomous driving models struggled in complex scenarios such as multi-lane intersections or double-parked vehicles, and said Alpamayo-R1 aims to give autonomous vehicles human-like common sense for safer navigation. The model is available on GitHub and Hugging Face, with supporting resources in the Cosmos Cookbook. Nvidia also showcased other Cosmos-based solutions, including LidarGen and ProtoMotions3, and emphasized its push into physical AI and robotics with the Jetson AGX Thor module.
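Since the checkpoint is distributed through Hugging Face, one way to experiment would be to load it with the generic vision-language interface in the transformers library. The sketch below is illustrative only: the repository id "nvidia/Alpamayo-R1", the prompt wording, and the assumption that the model plugs into the standard vision-to-text classes are guesses, not details confirmed by the announcement.

# Illustrative sketch only: the repo id, prompt, and use of the generic
# vision-to-text interface are assumptions, not confirmed by Nvidia.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "nvidia/Alpamayo-R1"  # assumed repository name

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(model_id, device_map="auto")

# Ask the model to reason about a single camera frame from a driving scene.
image = Image.open("intersection.jpg")
prompt = "Describe the scene and recommend a safe maneuver for the ego vehicle."

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])

In practice, the model card on Hugging Face and the Cosmos Cookbook resources would define the exact loading classes, prompt template, and output format to use.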

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.