Copilot Agents

Product overview

Name of Agent: Copilot Agents
Short description of agent: "Agents built in Microsoft Copilot Studio use AI to automate and execute business processes, working alongside or on behalf of a person, team, or organization" (link, archived)
Date of release: 21/10/2024 (link, archived)
Advertised use: "Retrieve information: Query systems like HR, CRM, or financial databases. Help with basic tasks: Submit expense reports, update CRM systems, create IT help desk tickets. Perform specialized roles: Facilitate meetings, compile research reports, analyze data sets. Automate processes: Handle financial planning, sales qualification, and supplier communications." (link, archived)
Monetisation/Usage price: "Copilot Credit packs, customers pay up front ($200.00/pack/month for 25,000 Copilot Credits). The pay-as-you-go meter requires no up-front license commitment and allows customers to pay only for the Copilot Credit capacity they use at the end of the billing period." See licensing guide for details. (link, archived)
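For rough context, here is a minimal Python sketch of the prepaid arithmetic implied by the quoted figures. The pay-as-you-go per-credit rate is not stated in the excerpt, and the monthly consumption figure below is hypothetical.

```python
# Back-of-the-envelope cost sketch for the prepaid Copilot Credit packs quoted
# above. Only the $200 / 25,000-credit pack figures come from the quote; the
# consumption number is a made-up illustration.
PACK_PRICE_USD = 200.00   # per pack, per month (from the licensing quote)
PACK_CREDITS = 25_000     # credits included in one pack

price_per_credit = PACK_PRICE_USD / PACK_CREDITS
print(f"Prepaid price per credit: ${price_per_credit:.4f}")            # $0.0080

expected_monthly_credits = 60_000                                      # hypothetical usage
packs_needed = -(-expected_monthly_credits // PACK_CREDITS)            # ceiling division
print(f"Packs needed: {packs_needed}")                                 # 3
print(f"Prepaid monthly cost: ${packs_needed * PACK_PRICE_USD:,.2f}")  # $600.00
```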
Who is using it?: enterprises
Category: Enterprise

Company & accountability

Developer: Microsoft
Name of legal entity: Microsoft Corporation (link, archived)
Place of legal incorporation: Washington, United States (headquartered in Redmond, Washington) (link, archived)
For profit company?: Yes
Parent company?: Microsoft
Governance documents analysis: ToU (link, archived)
AI safety/trust framework: Responsible AI Standard v2 (link, archived)
Compliance with existing standards: ISO/IEC 42001 (link, archived). "Your control over your data is reinforced by Microsoft's commitment to comply with broadly applicable privacy laws, such as the GDPR, and privacy standards, such as ISO/IEC 27018." (link, archived)

Technical capabilities & system architecture

Model specifications: Uses a range of frontier models, including GPT-4 and Claude models (link, archived). The models used change with updates (link, archived).
Observation space: Web and organisational data (e.g. emails, chats, and files) (link, archived)
Action space: Text output, plus actions within Microsoft 365 apps (e.g., building slides in PowerPoint).
Memory architecture: Has a memory system; see details (here, archived).
User interface and interaction design: Builders work in Copilot Studio through a form-based interface; end users interact with the resulting agents through a chatbot interface.
User roles: Designer, Operator, Executor, Examiner
Component accessibility: Closed source

Autonomy & control

Autonomy level and planning depth: The designer is L1-L2: the user can do most of the work manually and have the agent modify things here and there, or have the agent output a reasonably detailed template for them to continue working from. The resulting agents are L2-L4: outputs have affordances that let users continue the work inside Microsoft applications, and the more autonomous use cases seek little user feedback during execution.
User approval requirements for different decision types: Agent comes back to the user for confirmation/next steps
Execution monitoring, traces, and transparency: Unclear whether reasoning is enabled in the models that power the agents
Emergency stop and shut down mechanisms and user control: User can pause/stop the agent at any time
Usage monitoring and statistics and patterns: Available via Microsoft Power Platform admin center

Ecosystem interaction

Identifies to humans?:
- In the UI, AI-generated content is accompanied by a disclaimer to inform users (link, archived)
- The C2PA standard is applied to all images created with Copilot ((link, archived), (link, archived))
Identifies technically?:
- Web access depends on how an agent is configured
- Custom agents can use third-party web services for search (link, archived)
- When Microsoft 365 Copilot references web content, it does so via the Bing Search service (link, archived)
- Bing search results come from Bingbot (the Bing crawler), which has a dedicated user-agent string and IP ranges ((link, archived), (link, archived), (link, archived)); see the sketch below
- The computer use tool can perform tasks in a user's browser given prompts (link, archived)
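A minimal sketch of how a site operator might recognise traffic from the Bing crawler using the two identifiers mentioned above (the dedicated user-agent string and the IP ranges). The range-list URL and its JSON schema are assumptions here; Bing's webmaster documentation is the authoritative source for verification.

```python
# Illustrative Bingbot check: the user-agent string alone is spoofable, so the
# source IP is also compared against a published range list. Both the URL and
# the JSON schema ("prefixes" entries with ipv4Prefix/ipv6Prefix) are assumed.
import ipaddress
import requests

BINGBOT_RANGES_URL = "https://www.bing.com/toolbox/bingbot.json"  # assumed location

def is_bingbot(user_agent: str, client_ip: str) -> bool:
    """Return True if the request plausibly comes from Bingbot."""
    if "bingbot" not in user_agent.lower():
        return False
    prefixes = requests.get(BINGBOT_RANGES_URL, timeout=10).json()["prefixes"]
    ip = ipaddress.ip_address(client_ip)
    return any(
        ip in ipaddress.ip_network(p.get("ipv4Prefix") or p["ipv6Prefix"])
        for p in prefixes
    )

# Example call (values are illustrative, not verified Bingbot traffic):
# is_bingbot("Mozilla/5.0 ... bingbot/2.0; +http://www.bing.com/bingbot.htm", "157.55.39.1")
```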
Interoperability standards and integrations: Copilot Studio supports MCP tools and resources (link, archived). Microsoft 365 Copilot agents built via Copilot Studio can connect to external agents over the Agent2Agent (A2A) protocol (Preview), including using /.well-known/agent.json metadata discovery (link, archived)
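A minimal sketch of the A2A discovery step referenced above: a client fetches a peer agent's "agent card" from the well-known path and inspects its advertised metadata. The host is a placeholder and the printed fields are illustrative; the A2A specification defines the authoritative card schema.

```python
# Fetch an A2A agent card from the well-known discovery path and print a few
# commonly advertised fields. Field names follow public A2A examples but
# should be checked against the specification.
import requests

def fetch_agent_card(base_url: str) -> dict:
    """Retrieve the agent card published at /.well-known/agent.json."""
    resp = requests.get(f"{base_url.rstrip('/')}/.well-known/agent.json", timeout=10)
    resp.raise_for_status()
    return resp.json()

card = fetch_agent_card("https://agents.example.com")  # placeholder host
print(card.get("name"), "-", card.get("description"))
print("skills:", [skill.get("id") for skill in card.get("skills", [])])
```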
Web conduct: Microsoft 365 Copilot’s web grounding works by generating a query and sending it to the Bing search service. Bingbot obeys robots.txt ((link, archived), (link, archived))
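Because Bingbot honours robots.txt, a site operator can check whether Bing-mediated retrieval of a given page is permitted. Below is a minimal sketch using Python's standard-library robots.txt parser; the domain and path are illustrative.

```python
# Check robots.txt permissions for the "bingbot" user agent using the
# standard-library parser. example.com stands in for any real site.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

allowed = rp.can_fetch("bingbot", "https://www.example.com/internal/report.html")
print("bingbot may fetch:", allowed)
```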

Safety, evaluation & impact

Technical guardrails and safety measures: None found
Sandboxing and containment approaches: None found
What types of risks were evaluated?: Microsoft says it evaluated risks from jailbreaks, harmful content, and ungrounded content via “responsible AI evaluations” for Microsoft 365 Copilot (link, archived)
(Internal) safety evaluations and results: Microsoft reports conducting “extensive red team testing” for Microsoft 365 Copilot at both the model level (with OpenAI) and the application level (Copilot experiences) before public availability, with ongoing red teaming post-release (link, archived). Microsoft also performs internal red teaming and third-party penetration testing and assesses against the OWASP Top 10 for LLMs, but does not publish the resulting findings on the public page (instead pointing to the Service Trust Portal) (link, archived)
Third-party testing, audits, and red-teaming: Microsoft states it commissions third-party assessments that include penetration testing for Microsoft 365 Copilot, including evaluation against traditional vulnerabilities and the OWASP Top 10 for LLMs (link, archived)
Benchmark performance and demonstrated capabilities: None found
Bug bounty programmes and vulnerability disclosure: Yes (link, archived)
Any known incidents?: CVE-2025-32711, described as an AI command injection issue that could allow information disclosure over a network (link, archived)