The AI Agent Index

Documenting the technical and safety features of deployed agentic AI systems

AutoCodeRover


Basic information

Website: https://web.archive.org/web/20241218004150/https://autocoderover.dev/

Short description: “AutoCodeRover is a technology we are building for enterprises and developers to maintain reliable and performant software systems through autonomous program improvement.” [source]

Intended uses: What does the developer say it’s for? General-purpose coding and software development

Date(s) deployed: Unclear, but before 2025. Papers were released April 8, 2024 [source] and August 5, 2024 [source]


Developer

Website: https://web.archive.org/web/20241218004150/https://autocoderover.dev/

Legal name: National University of Singapore

Entity type: Academic Institution(s)

Country (location of developer or first author’s first affiliation): Singapore [source]

Safety policies: What safety and/or responsibility policies are in place? Unknown


System components

Backend model: What model(s) are used to power the system? Variable

Publicly available model specification: Is there formal documentation on the system’s intended uses and how it is designed to behave in them? None

Reasoning, planning, and memory implementation: How does the system ‘think’? The model writes a short plan at each step (see figure 1 [source]). The system operates in two stages: a context retrieval stage (locating the source of the bug) and a patch generation stage (fixing the bug). A “SpecRover” version of AutoCodeRover also “conduct[s] iterative code search accompanied by specification inference” on the purpose of the code to guide improvements [source].
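
A minimal sketch of what such a two-stage loop could look like, assuming a generic prompt-in/completion-out backend (the function names, prompts, and control flow below are hypothetical illustrations, not the developers’ code):

```python
# Illustrative sketch only: a generic two-stage "retrieve context, then patch" loop.
# Function names, prompts, and control flow are assumptions, not AutoCodeRover's code.
from typing import Callable, List

LLM = Callable[[str], str]  # prompt in, completion out


def retrieve_context(llm: LLM, issue: str,
                     search: Callable[[str], List[str]], max_rounds: int = 3) -> List[str]:
    """Stage 1: the model iteratively picks code-search queries until context is sufficient."""
    context: List[str] = []
    for _ in range(max_rounds):
        query = llm(
            f"Issue:\n{issue}\n\nContext so far:\n{context}\n\n"
            "Reply with the next search query, or DONE if the context is sufficient:"
        )
        if query.strip() == "DONE":
            break
        context.extend(search(query))
    return context


def generate_patch(llm: LLM, issue: str, context: List[str]) -> str:
    """Stage 2: the model drafts a patch against the retrieved code snippets."""
    snippets = "\n\n".join(context)
    return llm(f"Issue:\n{issue}\n\nRelevant code:\n{snippets}\n\nWrite a unified-diff patch:")
```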

Observation space: What is the system able to observe while ‘thinking’? Represents code repositories as abstract syntax trees (ASTs) [source]
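
A minimal sketch of what an AST-based view of a repository can provide, using Python’s standard ast module (illustrative only; not AutoCodeRover’s actual representation):

```python
# Illustrative sketch only: indexing a repository's classes and functions via Python ASTs.
# The file handling and index shape are assumptions, not AutoCodeRover's implementation.
import ast
from pathlib import Path
from typing import Dict, List, Tuple


def index_repo(repo_path: str) -> Dict[str, List[Tuple[str, int]]]:
    """Map each class/function name to the (file, line) locations where it is defined."""
    index: Dict[str, List[Tuple[str, int]]] = {}
    for py_file in Path(repo_path).rglob("*.py"):
        try:
            tree = ast.parse(py_file.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip files that cannot be parsed
        for node in ast.walk(tree):
            if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                index.setdefault(node.name, []).append((str(py_file), node.lineno))
    return index
```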

Action space/tools: What direct actions can the system take? Specialized code-search functions and writing code [source]
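
A minimal sketch of one such specialized search tool (the function name and output format are assumptions for illustration, not the system’s actual API):

```python
# Illustrative sketch only: a simple literal code search over a repository.
# The name search_code_in_repo and its output format are assumptions, not the system's API.
from pathlib import Path
from typing import List


def search_code_in_repo(repo_path: str, snippet: str) -> List[str]:
    """Return 'file:line: text' matches for a literal code snippet across the repository."""
    hits: List[str] = []
    for py_file in Path(repo_path).rglob("*.py"):
        try:
            lines = py_file.read_text(encoding="utf-8").splitlines()
        except UnicodeDecodeError:
            continue  # skip files that are not valid UTF-8
        for lineno, line in enumerate(lines, start=1):
            if snippet in line:
                hits.append(f"{py_file}:{lineno}: {line.strip()}")
    return hits
```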

User interface: How do users interact with the system? Unknown

Development cost and compute: What is known about the development costs? Unknown


Guardrails and oversight

Accessibility of components:

  • Weights: Are model parameters available? N/A; the system uses various backend models
  • Data: Is data available? N/A; the system uses various backend models
  • Code: Is code available? Closed source
  • Scaffolding: Is system scaffolding available? Closed source
  • Documentation: Is documentation available? Available [source]

Controls and guardrails: What notable methods are used to protect against harmful actions? Unknown

Customer and usage restrictions: Are there know-your-customer measures or other restrictions on customers? None

Monitoring and shutdown procedures: Are there any notable methods or protocols that allow for the system to be shut down if it is observed to behave harmfully? Unknown


Evaluation

Notable benchmark evaluations: 46.2% on SWE-bench Verified [source]

Bespoke testing: Demo [source]

Safety: Have safety evaluations been conducted by the developers? What were the results? None

Publicly reported external red-teaming or comparable auditing:

  • Personnel: Who were the red-teamers/auditors? None
  • Scope, scale, access, and methods: What access did red-teamers/auditors have and what actions did they take? None
  • Findings: What did the red-teamers/auditors conclude? None


Ecosystem information

Interoperability with other systems: What tools or integrations are available? None

Usage statistics and patterns: Are there any notable observations about usage? Unknown


Additional notes

None