A rundown of the risks presented by AI agents on corporate workstations
In our previous article, we explored how advancing technology is breaking down traditional security boundaries that organisations have relied upon. We discussed how the walls between human and automated interaction are becoming increasingly blurred, challenging security controls that have relied on distinguishing the two. Today, we'll dive deeper into a specific aspect of this challenge: the security implications of AI agents operating on corporate workstations.
As organisations begin deploying AI assistant tools across their workforce, we'll see a significant change in how employees interact with their computers. But this transformation brings new security considerations that many organisations haven't yet explored.
Agents that can interact with user desktops are being developed and released at pace. Microsoft's Recall can capture and interpret screen content, creating searchable databases of user activity that users can query in natural language. Google’s Project Mariner promises to automate complex workflows through natural language instructions, interpreting output and taking action within user browsers. Anthropic's Claude can directly interpret and control desktop computer interfaces, while OpenAI recently released their "Operator" agent, which uses a cloud-based browser to access Internet-based resources. There are also open-source options like “Browser Use”, which does what it says on the tin.
Most of these tools share a common architecture: they combine Large Language Models (LLMs) with Robotic Process Automation (RPA) capabilities. They don't share a risk profile though - while Recall rightly received attention for the security and privacy risks it presented when it was announced, tools offering full browser or computer control warrant even closer scrutiny. Browser-based automation operates under more constraints than tools with complete system access. Understanding these distinctions is crucial for security teams evaluating their deployment.
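To make that shared architecture concrete, here's a minimal sketch of the perceive-decide-act loop these tools have in common. The `propose_next_action` function is a hypothetical stand-in for a real model API call; the automation layer shown (pyautogui) is just one of many RPA options.

```python
import pyautogui  # real desktop automation library; any RPA layer would do

def propose_next_action(screenshot, goal):
    """Hypothetical stand-in for an LLM API call: given the current screen
    and a goal, return an action like {"type": "click", "x": 100, "y": 200}."""
    raise NotImplementedError("wire this to a model provider of your choice")

def run_agent(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        screenshot = pyautogui.screenshot()             # perceive: capture the screen
        action = propose_next_action(screenshot, goal)  # decide: ask the model
        if action["type"] == "done":
            break
        if action["type"] == "click":                   # act: drive the desktop
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.05)
```

The security-relevant point is in the loop itself: the model sees everything on screen and can act anywhere the logged-in user can, with no inherent distinction between the two.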
The driving force behind these tools is clear: they promise significant productivity gains and will probably deliver them. They hold the promise of tying together the disparate set of tools, communication channels, and information sources that a human uses in their day-to-day work. By automating routine tasks and providing intelligent assistance, they could dramatically improve workforce efficiency. But we should be thinking about how we adapt our cyber security defences as a result.
First, we'll tackle the base case: your organisation isn't going to directly implement desktop AI agents on corporate workstations - at least not yet. In our view, there is still a set of security implications to consider - even if you're not the ones deploying such systems. Here are some examples, grouped by the entity putting them to use.
Business partners and counterparties
Consumer interactions
Vendors
Unauthorised employee use
Organisations using virtual desktop infrastructure (VDI) or enterprise browser solutions to enable employees to work remotely might find these traditional isolation strategies less effective. AI agents running on host systems could potentially capture, interpret, and retain everything a virtual session displays, or inject input into it, undermining the boundary those controls are meant to enforce.
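To see why, consider that a VDI session is just pixels to software running on the host. A minimal sketch of the capture step, using Pillow's standard screen-grab call, shows how little a host-side agent needs in order to look inside the "isolated" environment:

```python
from PIL import ImageGrab  # Pillow; works on Windows and macOS hosts

# Capture the full desktop - including whatever a VDI client or enterprise
# browser window is currently displaying. No exploit or elevated privilege
# is needed; the remote session is just pixels to host-side software.
frame = ImageGrab.grab()
frame.save("desktop_capture.png")

# A Recall-style agent would then OCR and index frames like this one,
# retaining the session's contents outside the isolation boundary.
```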
Additionally, remote workers could use AI transcription software running on mobile devices or host operating systems to transcribe meetings and retain the records outside of the security boundary of the organisation. This risk has existed for some time for employees who may maliciously or covertly record remote meetings, but the use of transcription as a productivity tool may be seen by employees as justifiable or in service of their employer’s objectives.
If your organisation is looking to unlock some of the promised productivity from the use of these tools, then you'll have a few other considerations to add to the set above (which will still apply). When organisations deploy AI agents on corporate workstations, they're introducing a new type of privileged actor into their environment. These agents inherit the access levels of the logged-in user, including network/VPN access and application permissions. This raises several security issues: an agent's actions are largely indistinguishable from the user's own in most logs, complicating attribution and audit; agents can be steered by malicious content in the pages and documents they process (prompt injection); and a misbehaving or compromised agent has the same reach across systems as the employee it assists.
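One mitigation pattern worth considering is a policy gate between the model's proposed actions and their execution, so the agent doesn't unconditionally inherit the user's full reach. Here's a sketch under assumed action and rule formats - nothing below reflects any real product's API:

```python
from urllib.parse import urlparse

# Illustrative policy gate between an agent's proposed actions and execution.
# The action shape, app names, and hostnames here are hypothetical examples.
BLOCKED_APPS = {"outlook.exe", "keepass.exe"}   # e.g. no email client or vault
ALLOWED_HOSTS = {"intranet.example.com"}        # e.g. internal sites only

def is_permitted(action: dict) -> bool:
    if action.get("target_app", "").lower() in BLOCKED_APPS:
        return False
    if action["type"] == "navigate":
        return urlparse(action["url"]).hostname in ALLOWED_HOSTS
    return action["type"] in {"click", "type", "scroll"}

def execute_with_policy(action: dict, execute, audit_log: list) -> bool:
    """Run an action only if policy permits, and log the decision either way,
    so agent activity leaves a distinct, attributable trail."""
    permitted = is_permitted(action)
    audit_log.append({"actor": "agent", "action": action, "permitted": permitted})
    if permitted:
        execute(action)
    return permitted
```

The design choice worth noting is the audit entry tagged with `"actor": "agent"`: even where actions are allowed, recording them separately from human activity restores some of the attribution that agents otherwise erode.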
As AI agents become more prevalent in corporate environments, organisations need to adapt their security strategies. This could include restricting the permissions and data sources available to agents, creating audit trails that separate agent activity from human activity, and building detection for automated interaction patterns - one such signal is sketched below.
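Synthetic input tends to have a timing signature: scripted keystrokes arrive at near-constant intervals, while human typing is noisy. A toy heuristic, with purely illustrative thresholds:

```python
from statistics import pstdev

def looks_automated(key_times: list[float],
                    min_events: int = 20,
                    max_jitter_ms: float = 5.0) -> bool:
    """Flag a keystroke stream whose inter-key intervals are suspiciously regular."""
    if len(key_times) < min_events:
        return False                                  # too little data to judge
    intervals = [b - a for a, b in zip(key_times, key_times[1:])]
    return pstdev(intervals) * 1000 < max_jitter_ms   # seconds -> milliseconds
```

For instance, pyautogui's `write(..., interval=0.05)` emits roughly 50ms gaps with almost no jitter, which this check would flag; human typing typically varies by tens of milliseconds. Real detection would need more signals than this, but it illustrates that agent activity is observable if you look for it.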
The integration of AI agents into corporate workflows is likely inevitable, driven by their potential productivity benefits. However, organisations must approach this integration thoughtfully, with a clear understanding of the security implications and a strategy to address them.