Danish Kapoor

AI agent deleted a company's database in 9 seconds

PocketOS founder Jer Crane exposed the security gap between AI-assisted coding tools and cloud infrastructure with a harsh example. According to Crane, an AI agent running in Cursor and using Claude Opus 4.6 deleted the production database and its volume-level backups on Railway in 9 seconds. The incident directly affected PocketOS customers, who serve car rental companies. That is why this is not just a case of “the agent made a mistake”, but a live crisis showing how the permissions granted to production systems must be limited.

According to Crane, the agent was performing a routine operation in the staging environment. However, when it ran into a credentials problem, it went looking for a solution on its own, found a Railway API token in an unrelated file, and made a call with that token that deleted the data volume. In its response to Crane, the agent later admitted that it had performed no verification, had not checked whether the volume ID was shared between environments, and had run a destructive command without the user's permission. Frankly, this admission shows that AI agents still have a serious gap between “understanding instructions” and “choosing the safe action”.

Crane does not place all the responsibility on Cursor or Claude. The PocketOS founder says the Railway architecture also magnified the damage. Railway's own documentation explains that volumes hold persistent data, that you can create manual or scheduled backups for these volumes, and that backups are managed within the same volume logic. It also states that when a volume is deleted, Railway places it in a deletion queue, emails the user a restore link, and makes the deletion permanent after 48 hours.

The most serious consequence of the PocketOS incident was that the backups were destroyed along with the main data. Crane stated that the only full backup the company had was about three months old, so the team tried to manually reconstruct the intervening period's reservation, payment, and customer records from Stripe histories, calendar integrations, and email confirmations. This picture is a reminder that the backup strategy of a small or medium-sized SaaS company cannot stop at the question “do you have a backup?”
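A backup that exists but cannot be restored is as good as no backup, which is why restore testing belongs in the routine. Below is a minimal, illustrative sketch of a scheduled restore check, assuming a SQLite backup file and hypothetical table names; the function name and parameters are this article's invention, not any vendor's API:

```python
import os
import shutil
import sqlite3
import tempfile

def verify_backup(backup_path: str, expected_tables: list[str]) -> bool:
    """Restore a SQLite backup to a scratch location and sanity-check it.

    Intended to run on a schedule, not only after a disaster: it catches
    a backup that is present but silently broken or empty.
    """
    with tempfile.TemporaryDirectory() as scratch:
        restored = os.path.join(scratch, "restore-test.db")
        shutil.copy(backup_path, restored)  # simulate the restore step
        conn = sqlite3.connect(restored)
        try:
            # 1. The restored file must be structurally sound.
            (status,) = conn.execute("PRAGMA integrity_check").fetchone()
            if status != "ok":
                return False
            # 2. Every critical table must exist and be non-empty.
            for table in expected_tables:
                exists = conn.execute(
                    "SELECT name FROM sqlite_master "
                    "WHERE type='table' AND name=?",
                    (table,),
                ).fetchone()
                if exists is None:
                    return False
                (count,) = conn.execute(
                    f"SELECT COUNT(*) FROM {table}"
                ).fetchone()
                if count == 0:
                    return False
            return True
        finally:
            conn.close()
```

The same idea scales to Postgres or any managed volume: restore into a scratch environment, check integrity and row counts, and alert when the check fails.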

Production access must be narrowed for AI agents

The sectoral implications of this incident are clear. Developers are gaining speed with tools like Cursor, Claude Code, and GitHub Copilot, but when these tools can reach API tokens, the terminal, and cloud resources, they can cause damage much faster than classic automation errors. There have been complaints in the Cursor community before about agents running commands without asking for confirmation; in those discussions, users especially emphasize that autorun, the command permission list, and sandbox behavior should be easier to understand.

The real lesson here is that simply telling the AI agent “do not operate in production” is not enough. Companies should use separate tokens for production, staging, and testing environments; each token should be allowed to run only specific commands; and operations such as data deletion should never happen without human approval. In addition, backups should be kept in a region independent of the main volume, with a separate authorization model and regular restore testing. Long story short, an AI agent may increase your speed, but if you give it the keys to the production system, it will multiply your errors just as quickly.
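The allowlist-plus-approval idea can be sketched as a thin wrapper that sits between an agent and an infrastructure API. Everything here, including `GuardedClient` and the operation names, is illustrative and not part of Railway's or anyone else's real SDK:

```python
class ApprovalRequired(Exception):
    """Raised when a destructive call arrives without a human sign-off."""

# Operations that can destroy data; each one needs a named human approver.
DESTRUCTIVE_OPS = {"volume.delete", "database.drop", "backup.delete"}

class GuardedClient:
    """Wraps a raw API transport so an agent can only run allowlisted calls.

    Each environment gets its own instance with its own token, so a
    credential found in a staging file can never reach production.
    """

    def __init__(self, environment, allowed_ops, send):
        self.environment = environment
        self.allowed_ops = set(allowed_ops)
        self._send = send  # underlying transport, injected for testing

    def call(self, op, approved_by=None, **params):
        if op not in self.allowed_ops:
            raise PermissionError(
                f"{op} is not allowed in {self.environment}"
            )
        if op in DESTRUCTIVE_OPS and approved_by is None:
            raise ApprovalRequired(f"{op} needs a named human approver")
        return self._send(op, params)
```

With this split, an agent holding the staging client simply cannot issue a delete against production, and even a permitted deletion stalls until a human name is attached to the request.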

