Jules
Basic information
Short description: An AI coding / software engineering agent powered by Google’s Gemini 2.0 model.
Intended uses: What does the developer say it’s for? Completing complex coding / software engineering tasks [source]
Date(s) deployed: Not yet broadly deployed. Announced December 11, 2024: “We’re making Jules available for a select group of trusted testers today, and we’ll make it available for other interested developers in early 2025.” [source]
Developer
Website: https://web.archive.org/web/20241231224948/https://deepmind.google/
Legal name: Google LLC [source]
Entity type: LLC
Country (location of developer or first author’s first affiliation): USA; incorporated in Delaware (GOOGLE LLC 3582691) [source]
Safety policies: What safety and/or responsibility policies are in place? Google has published several relevant commitments [source] [source] [source]
System components
Backend model: What model(s) are used to power the system? Gemini 2.0 [source]
Publicly available model specification: Is there formal documentation on the system’s intended uses and how it is designed to behave in them? None
Reasoning, planning, and memory implementation: How does the system ‘think’? Unknown
Observation space: What is the system able to observe while ‘thinking’? Text, and possibly other modalities as well, since Gemini 2.0 is multimodal [source]
Action space/tools: What direct actions can the system take? Modifying files and preparing pull requests [source]; see the illustrative sketch at the end of this section.
User interface: How do users interact with the system? Unclear. A short video in [source] shows a GUI in which the user describes the task they want Jules to complete and Jules returns code and diffs to files.
Development cost and compute: What is known about the development costs? Unknown
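The action space above (editing files and preparing pull requests for human review) maps onto standard GitHub operations. The sketch below is purely illustrative: it assumes the public GitHub REST API, and the function name prepare_pull_request and its parameters are hypothetical. It does not reflect Jules’s actual tooling, which is not documented.

```python
# Illustrative only: what "modify a file and prepare a pull request" can look like
# against the public GitHub REST API. Not Jules's actual implementation.
import base64
import requests

API = "https://api.github.com"

def prepare_pull_request(token, owner, repo, base, branch, path, new_text, title, body):
    headers = {"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}

    # Create a working branch off the base branch.
    base_sha = requests.get(
        f"{API}/repos/{owner}/{repo}/git/ref/heads/{base}", headers=headers
    ).json()["object"]["sha"]
    requests.post(
        f"{API}/repos/{owner}/{repo}/git/refs", headers=headers,
        json={"ref": f"refs/heads/{branch}", "sha": base_sha},
    )

    # Modify a file on the new branch (the "edit files" action).
    existing = requests.get(
        f"{API}/repos/{owner}/{repo}/contents/{path}", headers=headers, params={"ref": branch}
    ).json()
    requests.put(
        f"{API}/repos/{owner}/{repo}/contents/{path}", headers=headers,
        json={
            "message": title,
            "content": base64.b64encode(new_text.encode()).decode(),
            "sha": existing["sha"],
            "branch": branch,
        },
    )

    # Open a pull request so a human can review the change before it merges.
    pr = requests.post(
        f"{API}/repos/{owner}/{repo}/pulls", headers=headers,
        json={"title": title, "body": body, "head": branch, "base": base},
    )
    return pr.json()["html_url"]
```

The property this illustrates is that the agent’s changes arrive as a reviewable pull request rather than a direct commit to the default branch, consistent with the developer’s description of Jules preparing pull requests for developers to review.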
Guardrails and oversight
Accessibility of components:
- Weights: Are model parameters available? Closed source
- Data: Is data available? Closed source
- Code: Is code available? Closed source
- Scaffolding: Is system scaffolding available? Closed source
- Documentation: Is documentation available? Unavailable
Controls and guardrails: What notable methods are used to protect against harmful actions? Unknown
Customer and usage restrictions: Are there know-your-customer measures or other restrictions on customers? Access is currently gated behind a waitlist for a select group of trusted testers
Monitoring and shutdown procedures: Are there any notable methods or protocols that allow for the system to be shut down if it is observed to behave harmfully? Unknown
Evaluation
Notable benchmark evaluations: 52.2% on SWE-bench Verified [source]
Bespoke testing: None
Safety: Have safety evaluations been conducted by the developers? What were the results? None
Publicly reported external red-teaming or comparable auditing:
- Personnel: Who were the red-teamers/auditors? None
- Scope, scale, access, and methods: What access did red-teamers/auditors have and what actions did they take? None
- Findings: What did the red-teamers/auditors conclude? None
Ecosystem information
Interoperability with other systems: What tools or integrations are available? Can prepare pull requests, which suggests GitHub integration [source]
Usage statistics and patterns: Are there any notable observations about usage? None
Additional notes
None