Opera Neon
Product overview
Name of Agent: Opera Neon
Who is using it?: end-users
Website: (https://www.operaneon.com/, archived)
Category: Browser
Company & accountability
Developer: Opera
Name of legal entity: Opera Norway AS
For profit company?: Yes
Parent company?: Opera
AI safety/trust framework: None found
Technical capabilities & system architecture
Model specifications: "Neon runs on Opera’s AI engine, which is model‑agnostic. We use different models depending on the task. Today, our primary providers are OpenAI and Google’s general‑purpose models. We also use specialized models for image generation, speech‑to‑text, text‑to‑speech, and so on. The agent picks the right model for the job—if you ask it to generate an image, it calls an image model; if you ask it to research, it uses the model we’ve found best for browsing and synthesis. This mix can evolve over time." (link, archived)
User interface and interaction design: The agent works in a sandboxed browser environment, similar to a user starting with a blank Opera browser. A sidebar shows the agent's action trace and includes a chatbox for the user to prompt the agent
User roles: Operator (directing the agent to complete tasks), Executor (the user can take additional actions themselves after the agent has finished working). No thumbs-up/down button for feedback in "Do" mode, though one is available in chat mode
Component accessibility: Closed source
Autonomy & control
Autonomy level and planning depth: L3: the agent asks the user for feedback and plan confirmation before taking more complex actions. Has an explicit "needs user action" UI that pauses the agent for user input at certain points in the trajectory
Execution monitoring, traces, and transparency: Visible chain-of-thought (CoT) and action trace documenting all activity
Emergency stop and shut down mechanisms and user control: User can pause/stop the agent at any time
Usage monitoring and statistics and patterns: The user can observe agent activity in the browser and its CoT; it is unclear whether other monitoring mechanisms exist
Ecosystem interaction
Safety, evaluation & impact
Technical guardrails and safety measures: Technical safeguards (link, archived) are built into the browser: prompt-injection safeguards that analyze prompts for potentially malicious characteristics, a human-in-the-loop approach that pauses for user interaction before sensitive actions such as completing transactions or downloading files, and a blacklist of high-risk pages (e.g., banking sites) that AI agents are prevented from accessing or acting on
What types of risks were evaluated?: None found
(Internal) safety evaluations and results: None found
Third-party testing, audits, and red-teaming: None found
Benchmark performance and demonstrated capabilities: None found