AI Agents Can Now Exploit DeFi Contracts, Study Finds


Citing Bitjie.com, a study by Anthropic and the ML Alignment & Theory Scholars (MATS) program reveals that advanced AI models can now identify and exploit vulnerabilities in DeFi smart contracts. Using SCONE-bench, a dataset of 405 previously hacked contracts, the researchers found that models such as GPT-5, Claude Opus 4.5, and Claude Sonnet 4.5 successfully reproduced attacks in simulation, extracting a combined $4.6 million. The agents did more than flag vulnerabilities: they generated working attack scripts and simulated liquidity extraction in ways that mirror real-world DeFi exploits. Tested against 2,849 recently deployed BNB Chain contracts, the models also uncovered two zero-day vulnerabilities, with potential simulated profits of $3,694.

The researchers warn that as AI models become cheaper and more capable, the window between contract deployment and exploitation will shrink, particularly in DeFi, where funds are publicly visible and exploitable flaws can be monetized immediately. They stress that the underlying capabilities are not limited to DeFi and could extend to traditional software and infrastructure.
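For readers unfamiliar with the class of flaws involved, the sketch below is a toy Python model of one of the most common DeFi vulnerability patterns, reentrancy, where a contract sends funds before updating its accounting. This is purely illustrative and is not code from the study or from SCONE-bench; the class names and amounts are invented for the example.

```python
# Toy model (assumption: illustrative only, not from the study) of a
# reentrancy bug: the "contract" pays out before zeroing the balance.

class VulnerableVault:
    """Toy contract that makes the external call BEFORE updating state."""
    def __init__(self, pooled_funds):
        self.balances = {}
        self.total = pooled_funds  # liquidity deposited by other users

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total += amount

    def withdraw(self, user, callback):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.total >= amount:
            self.total -= amount     # funds leave the pool...
            callback(amount)         # ...control passes to the caller...
            self.balances[user] = 0  # ...and only THEN is the balance zeroed


class Attacker:
    """Re-enters withdraw() before its balance is zeroed, draining the pool."""
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0

    def on_receive(self, amount):
        self.stolen += amount
        if self.vault.total >= amount:  # pool not yet empty: re-enter
            self.vault.withdraw("attacker", self.on_receive)


vault = VulnerableVault(pooled_funds=90)   # 90 units of other users' funds
attacker = Attacker(vault)
vault.deposit("attacker", 10)              # attacker stakes 10, pool = 100
vault.withdraw("attacker", attacker.on_receive)
print(attacker.stolen, vault.total)        # attacker drains the whole pool
```

The fix is the standard checks-effects-interactions ordering: zero the balance before making the external call, so a reentrant call sees an already-updated balance and gets nothing.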

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.