The AI Agent Index

Documenting the technical and safety features of deployed agentic AI systems

Proposer-Agent-Evaluator


Basic information

Website: https://web.archive.org/web/20250115041519/https://yanqval.github.io/PAE/

Short description: A system for vision foundation models to propose and practice skills in new environments [source]

Intended uses: What does the developer say it’s for? Completing general web-based tasks

Date(s) deployed: December 17, 2024 [source]


Developer

Website: https://web.archive.org/web/20250115041519/https://yanqval.github.io/PAE/

Legal name: University of California, Berkeley (et al.) [source]

Entity type: Academic institution, industry organization

Country (location of developer or first author’s first affiliation): California, USA [source]

Safety policies: What safety and/or responsibility policies are in place? None


System components

Backend model: What model(s) are used to power the system? LLaVA-34B and LLaVA-7B [source]

Publicly available model specification: Is there formal documentation on the system’s intended uses and how it is designed to behave in them? None

Reasoning, planning, and memory implementation: How does the system ‘think’? PAE uses chain-of-thought reasoning before taking actions [source].

Observation space: What is the system able to observe while ‘thinking’? PAE’s observation space consists of a screenshot of the current web page with the different interactive elements identified [source].

Action space/tools: What direct actions can the system take? “The action space contains primitive web operations such as clicking on links and typing into text boxes” [source].
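The observation and action spaces described above can be sketched as simple data types. This is an illustrative sketch only; `Observation` and `Action` are hypothetical names, not the types defined in PAE's open-sourced code.

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical sketch of the interface described above: the agent sees a
# screenshot with interactive elements identified, and acts via primitive
# web operations such as clicking links and typing into text boxes.

@dataclass
class Observation:
    screenshot: bytes        # rendered image of the current web page
    elements: list           # labels of the identified interactive elements

@dataclass
class Action:
    kind: Literal["click", "type", "scroll", "stop"]  # primitive web operations
    target: Optional[int] = None   # index of the element to act on
    text: Optional[str] = None     # text to enter for a "type" action

# Example: click the third identified element on the page.
action = Action(kind="click", target=2)
```

Representing actions as a small closed set of primitives is what makes the automated evaluator and the step limit (see Guardrails below) tractable.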

User interface: How do users interact with the system? The primary way to interact with PAE is through its open-sourced code and API [source].

Development cost and compute: What is known about the development costs? Roughly 30k trajectories × 1/512 instance-hour per trajectory ≈ 60 hours on an AWS P4 instance (8× A100 40 GB) per task; see Figure 6 [source]
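A quick back-of-the-envelope check of that estimate (the inputs are the paper's reported figures; the script is illustrative arithmetic only):

```python
# Compute estimate from the source: ~30k trajectories per task at
# ~1/512 hour of an AWS P4 instance (8x A100 40 GB) per trajectory.
trajectories_per_task = 30_000
instance_hours_per_trajectory = 1 / 512

instance_hours = trajectories_per_task * instance_hours_per_trajectory
print(f"~{instance_hours:.1f} P4 instance-hours per task")  # ~58.6, i.e. roughly 60
```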


Guardrails and oversight

Accessibility of components:

  • Weights: Are model parameters available? Available [source]
  • Data: Is data available? Available [source]
  • Code: Is code available? Available [source]
  • Scaffolding: Is system scaffolding available? Available [source]
  • Documentation: Is documentation available? Available [source]

Controls and guardrails: What notable methods are used to protect against harmful actions? PAE allows a maximum of 10 steps. Additionally, it has a system prompt that steers the system away from harmful or unhelpful behaviors [source].
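Together, the hard step cap and the steering system prompt amount to a bounded agent loop. A minimal sketch, with `propose_action`, `execute`, and `observe` standing in for the model call and browser driver (hypothetical names, and an illustrative prompt, not PAE's actual ones):

```python
MAX_STEPS = 10  # PAE's hard cap on actions per episode

# Illustrative stand-in for the steering system prompt described above.
SYSTEM_PROMPT = "You are a helpful web agent. Avoid harmful or unhelpful actions."

def run_episode(task, propose_action, execute, observe):
    """Run one task, stopping at the step cap or when the agent emits 'stop'."""
    obs = observe()
    for step in range(MAX_STEPS):
        # The model reasons (chain-of-thought) and then proposes one
        # primitive action given the system prompt, task, and observation.
        action = propose_action(SYSTEM_PROMPT, task, obs)
        if action == "stop":
            return "finished", step
        obs = execute(action)
    return "step_limit_reached", MAX_STEPS
```

The cap bounds worst-case behavior mechanically, while the prompt only steers it; neither is a monitoring or shutdown mechanism in the deployed-system sense.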

Customer and usage restrictions: Are there know-your-customer measures or other restrictions on customers? None

Monitoring and shutdown procedures: Are there any notable methods or protocols that allow for the system to be shut down if it is observed to behave harmfully? N/A (open-source)


Evaluation

Notable benchmark evaluations: WebVoyager (33%), WebArena Easy (25.7%) [source]

Bespoke testing: The developers test generalization on 85 real-world websites not included in other benchmarks, and validate the automated evaluator against human baselines [source].

Safety: Have safety evaluations been conducted by the developers? What were the results? None

Publicly reported external red-teaming or comparable auditing:

  • Personnel: Who were the red-teamers/auditors? None
  • Scope, scale, access, and methods: What access did red-teamers/auditors have and what actions did they take? None
  • Findings: What did the red-teamers/auditors conclude? None

Ecosystem information

Interoperability with other systems: What tools or integrations are available? Any web interface

Usage statistics and patterns: Are there any notable observations about usage? The GitHub repo has 42 stars and 2 forks [source].


Additional notes

None