The AI Agent Index

Documenting the technical and safety features of deployed agentic AI systems

SWE-Agent


Basic information

Website: https://web.archive.org/web/20241127010402/https://github.com/princeton-nlp/SWE-agent

Short description: “SWE-agent lets your language model of choice (e.g. GPT-4o or Claude Sonnet 3.5) autonomously use tools to: fix issues in real GitHub repositories, perform tasks on the web, find cybersecurity vulnerabilities (by solving Capture The Flag challenges), or any custom task.” [source]

Intended uses: What does the developer say it’s for? General-purpose coding, software development, and web-browsing tasks.

Date(s) deployed: April 15, 2024 [source]


Developer

Website: https://web.archive.org/web/20241127010402/https://github.com/princeton-nlp/SWE-agent

Legal name: Princeton University [source]

Entity type: Academic Institution(s)

Country (location of developer or first author’s first affiliation): New Jersey, USA [source]

Safety policies: What safety and/or responsibility policies are in place? None


System components

Backend model: What model(s) are used to power the system? Variable; the paper’s experiments use GPT-4 Turbo and Claude 3 Opus [source]

Publicly available model specification: Is there formal documentation on the system’s intended uses and how it is designed to behave in them? None

Reasoning, planning, and memory implementation: How does the system ‘think’? The system has a short “discussion” phase before each command [source] and writes its thoughts, actions, and observations to its context window; failed actions that are later fixed are removed from the context. It works using “configurable agent-computer interfaces (ACIs) to interact with isolated computer environments.” [source]
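The loop described above can be sketched in Python. This is an illustrative mock, not SWE-agent’s actual implementation: the `Step` and `AgentContext` names are invented here, and the pruning rule (drop earlier failed attempts once a later action succeeds) is a simplified reading of the behavior described in the source.

```python
# Illustrative sketch of a thought/action/observation context window
# with pruning of failed-then-fixed actions. Names are hypothetical,
# not taken from the SWE-agent codebase.
from dataclasses import dataclass, field


@dataclass
class Step:
    thought: str       # the short "discussion" before the command
    action: str        # the command issued to the environment
    observation: str   # what the environment returned
    failed: bool = False


@dataclass
class AgentContext:
    steps: list = field(default_factory=list)

    def record(self, step: Step) -> None:
        self.steps.append(step)

    def prune_fixed_failures(self) -> None:
        # If the most recent action succeeded, earlier failed attempts
        # no longer need to occupy the context window, so drop them.
        if self.steps and not self.steps[-1].failed:
            self.steps = [s for s in self.steps if not s.failed]

    def render(self) -> str:
        # Serialize the remaining steps for the backend model's prompt.
        return "\n".join(
            f"THOUGHT: {s.thought}\nACTION: {s.action}\nOBSERVATION: {s.observation}"
            for s in self.steps
        )
```

For example, a failed edit followed by a successful retry leaves only the successful step in the rendered context.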

Observation space: What is the system able to observe while ‘thinking’? Inputs, terminal outputs, and filesystem.

Action space/tools: What direct actions can the system take? Commands to search, view, and edit files.
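Minimal versions of such file commands might look like the sketch below. The function names, signatures, and line-window behavior are hypothetical illustrations of the general idea, not SWE-agent’s actual ACI commands.

```python
# Hypothetical minimal file tools of the kind an agent-computer
# interface might expose: search, view, and edit. Illustrative only.
from pathlib import Path


def search_file(path: str, term: str) -> list:
    """Return 1-based line numbers whose text contains `term`."""
    lines = Path(path).read_text().splitlines()
    return [i for i, line in enumerate(lines, 1) if term in line]


def view_file(path: str, start: int, end: int) -> str:
    """Return a numbered window of lines start..end (1-based, inclusive)."""
    lines = Path(path).read_text().splitlines()
    return "\n".join(
        f"{i}: {line}" for i, line in enumerate(lines[start - 1 : end], start)
    )


def edit_file(path: str, start: int, end: int, replacement: str) -> None:
    """Replace lines start..end (1-based, inclusive) with `replacement`."""
    lines = Path(path).read_text().splitlines()
    lines[start - 1 : end] = replacement.splitlines()
    Path(path).write_text("\n".join(lines) + "\n")
```

Exposing a small, structured command set like this, rather than a raw shell, is the core design idea behind agent-computer interfaces.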

User interface: How do users interact with the system? The user must run the code manually.

Development cost and compute: What is known about the development costs? Unknown


Guardrails and oversight

Accessibility of components:

  • Weights: Are model parameters available? N/A; depends on the backend model
  • Data: Is data available? N/A; depends on the backend model
  • Code: Is code available? Available [source]
  • Scaffolding: Is system scaffolding available? Available [source]
  • Documentation: Is documentation available? Available [source]

Controls and guardrails: What notable methods are used to protect against harmful actions? None by default, beyond individual users monitoring and intervening manually. Docker containers are used to ensure reproducible, sandboxed execution [source].
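A sandboxed invocation of the kind described above can be assembled as a `docker run` command line. The sketch below is illustrative of the general sandboxing pattern, not SWE-agent’s actual configuration; the image name, mount point, and default network isolation are assumptions.

```python
# Illustrative sketch: build a `docker run` argv that executes an
# agent command inside an isolated container. Not SWE-agent's code.


def sandboxed_command(image: str, workdir: str, cmd: str,
                      network: bool = False) -> list:
    """Return an argv list that runs `cmd` inside a throwaway container."""
    argv = [
        "docker", "run", "--rm",          # remove container after exit
        "-v", f"{workdir}:/workspace",    # mount the repo under test
        "-w", "/workspace",               # run from the mounted repo
    ]
    if not network:
        # Deny network access by default so the sandboxed command
        # cannot reach external services.
        argv += ["--network", "none"]
    return argv + [image, "bash", "-lc", cmd]
```

Passing the result to `subprocess.run(argv)` would execute the command in the container; here only the command construction is shown.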

Customer and usage restrictions: Are there know-your-customer measures or other restrictions on customers? None

Monitoring and shutdown procedures: Are there any notable methods or protocols that allow for the system to be shut down if it is observed to behave harmfully? Depends on what is implemented in a specific configuration.


Evaluation

Notable benchmark evaluations: 33.6% on SWE-bench Verified [source]

Bespoke testing: Demo [source]

Safety: Have safety evaluations been conducted by the developers? What were the results? None

Publicly reported external red-teaming or comparable auditing:

  • Personnel: Who were the red-teamers/auditors? None
  • Scope, scale, access, and methods: What access did red-teamers/auditors have and what actions did they take? None
  • Findings: What did the red-teamers/auditors conclude? None

Ecosystem information

Interoperability with other systems: What tools or integrations are available? Only coding, terminal, and filesystem interfaces by default.

Usage statistics and patterns: Are there any notable observations about usage? 1.4k forks and 14.2k stars on GitHub [source]


Additional notes

None