A security flaw in the rapidly growing AI coding platform Orchids allowed a hacker to take full control of a BBC journalist's laptop on February 16, 2026. The breach happened in seconds, requiring no clicks, downloads, or warnings from the victim. This "zero-click" takeover highlights new cybersecurity risks as artificial intelligence tools gain more autonomy over computer systems.[orbitaltoday+1]
How the AI Breach Unfolded
Cyber-security researcher Etizaz Mohsin demonstrated the critical vulnerability by targeting a test project on the reporter's spare machine. He subtly inserted a malicious change into the AI-generated code within the Orchids platform. The system, designed to write and run code based on user instructions, accepted and executed the altered code without human intervention.[orbitaltoday+1]
Moments after the code ran, a new file appeared on the laptop's desktop. The computer's wallpaper abruptly changed to a robot skull, displaying the ominous message: "you are hacked." This incident showcased how an AI application's convenience can inadvertently create a silent backdoor, exposing devices to significant danger.[orbitaltoday+1]
Unlike most cyber attacks that rely on trickery, such as convincing users to click malicious links or open infected files, this attack required no user interaction. The harmful code operated directly inside the trusted AI project itself. This gave Mohsin remote access to the machine, allowing him to view and edit files.[orbitaltoday+1]
A criminal exploiting this same flaw could install spyware, steal financial data, or even activate the laptop's cameras and microphones. Mohsin warned that "the whole proposition of having the AI handle things for you comes with big risks." He reported the issue to Orchids weeks before the public demonstration.[orbitaltoday+1]
Orchids, founded in 2025, boasts around one million users. The company did not publicly respond to the report before the demonstration. They later indicated that their small team might have missed earlier warnings due to being overwhelmed.[orbitaltoday]
The Rise of Autonomous AI and New Risks
The incident with the Orchids platform underscores a broader concern about AI coding tools and their increasing ability to automate tasks. These tools promise faster development and lower costs, enabling businesses and hobbyists to create software without extensive technical skills. However, experts warn that this automation, without proper review and security protocols, introduces fresh dangers.[orbitaltoday]
Professor Kevin Curran of Ulster University noted that AI-generated projects often lack strict testing and documentation. This can allow hidden weaknesses to spread across thousands of software builds. The emergence of "agentic AI" further complicates the landscape, as software now carries out complex actions on user devices with minimal oversight.[orbitaltoday]
These advanced AI systems can manage files, send messages, and execute commands autonomously. A flaw in one layer of such an AI system can potentially expose an entire machine. While Mohsin has not found the same vulnerability in rival platforms, the demonstration emphasizes the expanded "attack surface" when users grant AI deep system access.[orbitaltoday]
Leading AI models like Anthropic's Claude, with its "Computer Use" feature released in October 2024, exemplify how AI is gaining direct control over computer interfaces. This technology allows AI to "see" a screen, interpret its elements, and then simulate human actions like moving a cursor, clicking buttons, and typing text.This capability essentially allows AI to become a virtual operator, blurring the lines between human and machine interaction.[futurehumanism+5]
The process involves the AI capturing screenshots to understand what is displayed, identifying buttons, text fields, and menus, and then executing actions through simulated inputs. A feedback loop ensures the system observes the results of its actions and adjusts its next steps accordingly.[edwardtechnology+3]
Microsoft Certified Professional Errol Janusz also demonstrated an autonomous AI agent taking full control of a Windows 11 computer in November 2025. This agent could navigate complex workflows, recognize applications, and execute multi-step tasks independently, controlling the keyboard and mouse with zero human interaction.[youtube]
Security Implications and User Advice
The ability of AI to control computers autonomously brings significant security considerations. Risks include "prompt injection," where malicious content on websites could theoretically manipulate the AI. There is also the danger of "credential exposure," as the AI sees everything on screen, including sensitive information. Unintended actions resulting from the AI misinterpreting instructions pose another threat.[futurehumanism]
Even though these AI control features are in their early or beta stages, developers are exploring their potential for automating repetitive processes, software testing, and open-ended research. Companies like Anthropic acknowledge that their AI's ability to use computers is still imperfect and can make mistakes, such as struggling with scrolling or dragging.[futurehumanism+1]
Experts urge caution for users engaging with experimental AI tools. They recommend running such applications on separate machines whenever possible to isolate potential risks. Using limited or disposable accounts can also mitigate the impact of a breach. Furthermore, users should carefully review and restrict permissions before granting AI full system access.[orbitaltoday]
The rapid growth of AI coding and autonomous agent technologies shows no signs of slowing down. As AI systems become more integrated into daily computing, robust security controls must evolve at an equivalent pace. Otherwise, the promise of effortless creation and automation may come with unforeseen and significant security costs.[orbitaltoday]



