Digital Transformation

Security Implications of AI Agents on Corporate Workstations

A rundown of the risks presented by AI agents on corporate workstations

David Stocks

An image of a office containing rows of office workstation computers

In our previous article, we explored how advancing technology is breaking down traditional security boundaries that organisations have relied upon. We discussed how the walls between human and automated interaction are becoming increasingly blurred, challenging security controls that have relied on distinguishing the two. Today, we'll dive deeper into a specific aspect of this challenge: the security implications of AI agents operating on corporate workstations.

As organisations begin deploying AI assistance tools across their workforce, we’ll see a significant change in how employees interact with their computers. But this transformation brings new security considerations that many organisations haven't yet explored.

The Current Landscape

Agents that can interact with user desktops are being developed and released at pace. Microsoft's Recall can capture and interpret screen content, creating searchable databases of user activity that users can query in natural language. Google’s Project Mariner promises to automate complex workflows through natural language instructions, interpreting output and taking action within user browsers. Anthropic's Claude can directly interpret and control desktop computer interfaces, while OpenAI recently released their "Operator" agent, which uses a cloud-based browser to access Internet-based resources. There are also open-source options like “Browser Use”, which does what it says on the tin. 

Most of these tools share a common architecture: they combine Large Language Models (LLMs) with Robotic Process Automation (RPA) capabilities. They don’t share a risk profile though - while Recall rightly received attention when it was announced for the security and privacy risks it presented, the full browser or computer control systems should prompt more consideration. Browser-based automation has more constraints than tools with complete system access. Understanding these distinctions is crucial for security teams evaluating their deployment.

The driving force behind these tools is clear: they promise significant productivity gains and will probably deliver them. They hold the promise of tying together the disparate set of tools, communication, and information sources that a human uses in their day to date work. By automating routine tasks and providing intelligent assistance, they could dramatically improve workforce efficiency. But we should be thinking about how we adapt our cyber security defences as a result.

Security implications for organisations not deploying desktop AI agents

First, we’ll tackle the base case: your organisation isn’t going to directly implement desktop AI agents on corporate workstations - at least not yet. In our view, there are still a set of security implications to consider - even if you’re not the ones deploying such systems. Here are some examples, grouped by the entity putting them to use.

Business partners and counterparties

  • AI-generated emails or communication that includes hallucinated detail - for example, an AI generated invoice with incorrect payment details on them. Or misaggregation of your data as relating to another matter, and including it in generated communication between your business partner and a third party.
  • Unintended collection of your data that was expected to be ephemeral or only ever viewed in part, combined with data retention and handling practices that might not align with your expectations.

Consumer Interactions

  • Automated account creation and management at scale. Agents that can combine the scaled creation of new or disposable email addresses or virtual phone numbers, and new accounts on your platform - all from devices and browsers that are indistinguishable from other consumers.
  • Automated form submission that could overwhelm systems or intentionally engage with customer support agents and other support mechanisms to challenge their capacity.

Vendors

  • Less controlled access to your systems and data. Many organisations have been subject to breaches by virtue of the access their technology vendors required to service them. As they deploy agents, there is a risk for unauthorised collection and storage of your data, including sensitive secrets like API keys and credentials.
  • Potential automated processing of sensitive information, such as sensitive employee information involved in processing payroll or other frequently outsourced tasks, that may store or intermingle data between customer tenants on the vendor side if not well managed.

Unauthorised employee use

Organisations using virtual desktop infrastructure (VDI) or enterprise browser solutions to enable employees to work remotely might find these traditional isolation strategies less effective. AI agents running on host systems could potentially:

  • Extract large volumes of information from screen content, bypassing data loss prevention controls, which largely gate the extraction of bulk information in a downloadable way.
  • Operate outside the visibility of security monitoring tools, which would be unable to detect the presence of an agent not running on its infrastructure using traditional detection methods.

Additionally, remote workers could use AI transcription software running on mobile devices or host operating systems to transcribe meetings and retain the records outside of the security boundary of the organisation. This risk has existed for some time for employees who may maliciously or covertly record remote meetings, but the use of transcription as a productivity tool may be seen by employees as justifiable or in service of their employer’s objectives.  

Security implications for organisations deploying desktop AI agents

If your organisation is looking to unlock some of the promised productivity from the use of these tools, then you’ll have a few other considerations to add to the set above (which will still apply). When organisations deploy AI agents on corporate workstations, they're introducing a new type of privileged actor into their environment. These agents inherit the access levels of the logged-in user, including network/VPN access and application permissions. This raises several security issues:

  • Traditional security practices may need reevaluation. Will employees be tempted to leave their workstations unlocked to allow AI agents to continue working? 
  • The model’s errors are your data breach - if an automation workflow includes a step where a person is sent an email, what if the wrong recipient is selected by the agent? This is a common mode of data breach when humans are involved, but also something the Operator system card calls out as a common type of error that occurred in their testing. 
  • The potential for prompt injection when exposed to uncontrolled input, including the emails people send you or sites that may be visited by agents in the course of their work. Malicious actors could induce the agent into performing unintended actions by instructing them through prompts in emails or on websites.

Looking Ahead

As AI agents become more prevalent in corporate environments, organisations need to adapt their security strategies. This could include:

  1. Developing new policies specifically addressing AI agent use and deployment, including those that apply to third parties and vendors. They may need to be supported by new third-party risk assessment requirements, or updated contractual obligations for counterparties.
  2. Implementing enhanced monitoring capabilities to detect automated behavior. Most logging and monitoring capabilities are focused on traditional interactive scenarios by machines and humans. Should we be attempting to detect if the nature of a session is agent-driven or human driven? With machines running through a human interface, should we be more rigorously logging what is displayed to humans, as well as what is downloaded? A lot of this will need to be driven by large software vendors, but it also applies to our own internally developed web applications. 
  3. Reassessing access control models to account for AI agent operations. Single-sign on makes for an easier experience for employees, as well as reduced effort and increased reliability of access grants and revocation for joining and departing workers. But should we increase friction by putting in check points that require human control in our systems before taking sensitive actions or accessing our most sensitive information? The big models have built some check points in, but organisations will know the sensitive points of their workflows more than a generically applicable agent.
  4. Creating incident response procedures for AI-related security events. Organisations create playbooks for many different types of incident, from ransomware to email account compromise. Creating playbooks for AI-agent driven security incidents can help us expedite our response.

The integration of AI agents into corporate workflows is likely inevitable, driven by their potential productivity benefits. However, organisations must approach this integration thoughtfully, with a clear understanding of the security implications and a strategy to address them.

Other articles

Stay informed with
Germane Insights