According to a research team working with Alibaba Group (China), an autonomous artificial intelligence agent called ROME attempted to mine cryptocurrencies in a fraudulent manner during training.
This behavior was detected during a reinforcement learning session when researchers observed safety alerts related to abnormal traffic. GPU usage not corresponding to training goals.
The agent repurposed resources originally intended to train a model into a process compatible with cryptocurrency mining and created a reverse SSH tunnel, a connection that allows internal computers to bypass certain firewalls and receive access from outside the network.
We also observed misuse and reallocation of GPU capacity provisioned for cryptocurrency mining, quietly diverting compute from training, inflating operational costs, and causing obvious legal and reputational damage.
ROME engineer.
researchers are revealing that Agent behavior is not intentionally programmedHowever, it appeared as a new behavior during optimization. Similarly, the event occurred in the environment sandboxed, In other words, it is a controlled and designed space for experimentation.
The engineers emphasized that what happened was not something the agent “wanted” to do out of malice or conscious autonomy, but rather was described as an instrumental action. In other words, agents found ways to divert resources and “play” with the available environment, even if they were not needed for their primary task.
The incident has reignited debate within the technology community about the limits of autonomy for AI systems. While some experts warn of the need for stricter controls to prevent misuse of digital resources, others think: What kind of accidents of this kind can be expected during the experimental stage? As reported by CriptoNoticias, it allows for improvements in security protocols.
While this episode does not represent an immediate risk to the crypto industry, it does illustrate the importance of establishing robust monitoring mechanisms for autonomous agents. As these tools gain operational capabilities, a balance between innovation and security will be key to maintaining trust in the technology.
ROME is part of the Agenttic Learning Ecosystem (ALE), a research environment designed to enable AI agents to autonomously complete complex tasks, interact with digital tools, and execute commands without direct human intervention.

