The decentralized finance protocol Moonwell recorded a loss of $1.7 million on February 15. A critical bug in its smart contracts let attackers exploit a vulnerability in the pricing of cbETH collateral on the Base network. The incident stands out because artificial intelligence was directly involved in producing the vulnerable code.
The error caused the system to record the price of cbETH as just $1.12, even though the asset's actual market value was above $2,200. That gap let malicious users exploit an incorrect valuation of the collateral. Moonwell's team confirmed the amount of the financial damage shortly after detecting the oracle anomaly.
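To put the gap in perspective, the following sketch in Python compares the collateral value the protocol saw with the value at market price. Only the two prices come from the incident report; the 10 cbETH position and the function name are hypothetical.

    # Sketch with hypothetical position size: how far a $1.12 oracle reading
    # undervalues cbETH collateral relative to its ~$2,200 market price.

    REPORTED_ORACLE_PRICE = 1.12     # price the protocol recorded for cbETH (USD)
    ACTUAL_MARKET_PRICE = 2200.00    # approximate market price of cbETH (USD)

    def collateral_value(amount_cbeth: float, price_usd: float) -> float:
        """Value of a cbETH collateral position at a given USD price."""
        return amount_cbeth * price_usd

    position = 10.0  # hypothetical position of 10 cbETH
    print(collateral_value(position, ACTUAL_MARKET_PRICE))    # ~22,000 USD at market price
    print(collateral_value(position, REPORTED_ORACLE_PRICE))  # ~11.20 USD as the protocol saw it
    print(ACTUAL_MARKET_PRICE / REPORTED_ORACLE_PRICE)        # collateral undervalued by roughly 1,960x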
The technical community traced the origin of the defective code to the project's official GitHub repository. Pull request #578 carried an unusual co-author tag that explicitly credited Claude Opus 4.6, Anthropic's artificial intelligence model. It is the first time a bug of this kind has been linked directly to an AI assistant.
A pull request is the process by which programmers propose changes to a project's main codebase; other developers must review and approve those changes before they are merged. Despite its seriousness, the technical error in the Moonwell contract passed every human review, meaning the reviewers did not notice the configuration mistake the AI had originally suggested.
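For context, GitHub attributes co-authorship through a trailer in the commit message. A tag of the kind reported would look roughly like the following; the commit subject and email address here are illustrative, not the actual contents of pull request #578.

    Fix cbETH oracle configuration

    Co-authored-by: Claude <noreply@anthropic.com>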
On the portal where Moonwell published its incident summary, multiple users reported that USDC loans backed by cbETH were liquidated once the oracle set the price at $1.12. Those affected described what happened as "theft" and are demanding a compensation plan from the platform.
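A rough sketch of why those liquidations followed from the faulty reading: in an overcollateralized lending market, a loan becomes liquidatable when the debt exceeds the borrowing power implied by the oracle price of the collateral. The 0.80 collateral factor and the position sizes below are assumptions, not Moonwell's actual parameters.

    # Sketch of the liquidation condition in an overcollateralized lending market.
    COLLATERAL_FACTOR = 0.80  # assumed fraction of collateral value that can be borrowed

    def is_liquidatable(cbeth_amount: float, cbeth_price: float, usdc_debt: float) -> bool:
        """True when the debt exceeds the borrowing power implied by the
        oracle price of the cbETH collateral."""
        borrow_limit = cbeth_amount * cbeth_price * COLLATERAL_FACTOR
        return usdc_debt > borrow_limit

    # Hypothetical position: 10 cbETH backing a 10,000 USDC loan.
    print(is_liquidatable(10.0, 2200.00, 10_000))  # False: healthy at the market price
    print(is_liquidatable(10.0, 1.12, 10_000))     # True: liquidatable at the faulty $1.12 reading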
The dangerous rise of “vibe coding”
The incident has stimulated discussion about a current trend known as vibe coding, the practice of programming through natural-language instructions processed by an AI. Many developers prioritize delivery speed over a deep understanding of the software's logic, and relying blindly on the recommendations of generative models raises the risk in critical financial environments.
Several experts have accused large technology companies of promoting this practice to reduce operating costs. Artificial intelligence, however, lacks context about the economic impact of its errors, and the ultimate responsibility always lies with the humans who verify the machine's output. The Moonwell incident shows that AI tools require closer and more skeptical technical oversight.

