The AI Agent Index

Documenting the technical and safety features of deployed agentic AI systems

Gru


Basic information

Website: https://gru.ai/

Short description: Gru has four modes: “1) Assistant Gru: Helps users solve standalone technical issues, which is now in public use; 2) Test Gru: Generates unit test code automatically; 3) Bug Fix Gru: Fixes bugs based on user issues automatically; and 4) Babel Gru: Assists in building end-to-end projects” [source]

Intended uses: What does the developer say it’s for? General-purpose coding and software development.

Date(s) deployed: Gru.ai was founded in July 2023 [source]. The product launch date is unclear.


Developer

Website: https://web.archive.org/web/20241220064538/http://www.gru.ai/

Legal name: Babel Inc.; Babelcloud Inc. [source]

Entity type: Corporation [source]

Country (location of developer or first author’s first affiliation): Incorporation: Delaware, USA (Babelcloud Inc. (2474686)) [source]

Safety policies: What safety and/or responsibility policies are in place? Unknown


System components

Backend model: What model(s) are used to power the system? Unknown

Publicly available model specification: Is there formal documentation on the system’s intended uses and how it is designed to behave in them? None

Reasoning, planning, and memory implementation: How does the system ‘think’? Unknown

Observation space: What is the system able to observe while ‘thinking’? Unknown

Action space/tools: What direct actions can the system take? Unknown

User interface: How do users interact with the system? Unknown

Development cost and compute: What is known about the development costs? Unknown


Guardrails and oversight

Accessibility of components:

  • Weights: Are model parameters available? Closed source
  • Data: Is data available? Closed source
  • Code: Is code available? Closed source
  • Scaffolding: Is system scaffolding available? Closed source
  • Documentation: Is documentation available? Unavailable

Controls and guardrails: What notable methods are used to protect against harmful actions? Unknown

Customer and usage restrictions: Are there know-your-customer measures or other restrictions on customers? There is documentation specifying how the product cannot be used [source].

Monitoring and shutdown procedures: Are there any notable methods or protocols that allow for the system to be shut down if it is observed to behave harmfully? Accounts may be warned, suspended, or terminated if a user does not abide by the Terms of Service [source].


Evaluation

Notable benchmark evaluations: 57% on SWE-bench Verified [source]

Bespoke testing: A demo of Assistant Gru is available when logged in with a GitHub account [source]

Safety: Have safety evaluations been conducted by the developers? What were the results? None

Publicly reported external red-teaming or comparable auditing:

  • Personnel: Who were the red-teamers/auditors? None
  • Scope, scale, access, and methods: What access did red-teamers/auditors have and what actions did they take? None
  • Findings: What did the red-teamers/auditors conclude? None

Ecosystem information

Interoperability with other systems: What tools or integrations are available? None

Usage statistics and patterns: Are there any notable observations about usage? Unknown


Additional notes

None