
**ZDNET Key Takeaways**
If you’re exploring an *agentic browser* (a new category of intelligent web browser that integrates AI capabilities directly into the browsing experience), it’s worth seriously considering a *local AI* setup rather than a remote, cloud-based one. Running AI locally reduces the demand your queries place on the electrical grid, and it keeps those queries and your data confined to your own device. That’s a dual win: lower environmental impact and stronger privacy.

**Agentic Browsers and the Emerging AI Browser War**
Agentic browsers are, figuratively speaking, storming the castle gates of the tech industry. We appear to be on the cusp of yet another browser revolution, one reminiscent of earlier browser wars, yet distinct in that this time the defining weapons are ‘intelligent’ automation and integrated AI systems. From my standpoint, this technological clash is both exciting and concerning. Envision a near future where nearly every individual on the planet is using a browser powered by an agentic AI system. The cumulative computational load of those millions of cloud-based AI interactions could demand immense power and resources. Electricity prices could surge dramatically, and the added strain on the grid would exact a troubling environmental toll on the planet.

**A More Sustainable Solution: Local AI**
This potential crisis does, however, have a sensible and sustainable answer—local AI processing. Instead of outsourcing AI tasks to distant, energy-intensive data centers, computations occur directly on your own computer. This setup substantially reduces dependence on massive external energy infrastructures while maintaining greater control over one’s own data.

**Also:** Opera’s upcoming agentic browser, Neon, is beginning its rollout to select users. Learn how to join the waitlist to experience it firsthand.

As someone who occasionally needs to use AI-based tools, I make it a point to do so exclusively at the *local* level. For instance, I rely on a local AI system known as **Ollama**. Unfortunately, nearly every agentic browser currently available depends on cloud-driven AI models, which, for me, is a dealbreaker. The idea of contributing to additional strain on the energy grid is troubling enough, but my larger concern is privacy. I would far rather process all of my queries privately, on-device, without fear of my data being harvested for model training or user profiling.

Out of the modest selection of agentic browsers that support local AI integration, I have found two potential options: **BrowserOS** and **Opera Neon**. However, at present, only BrowserOS is available to the general public.

**BrowserOS: Local AI, Global Potential**
BrowserOS can be installed across the major operating systems—Linux, macOS, and Windows. To unlock its locally powered AI browsing capabilities, users must install Ollama and download a compatible model, such as **qwen2.5:7b**, which supports agentic operations.

After testing several AI browsers, I found BrowserOS to be one of the most balanced and reliable entrants in the space. It performs on par with cloud-reliant browsers, but crucially, it avoids the privacy compromises and energy inefficiencies those alternatives impose. Once I configured BrowserOS to work with Ollama, I tested it by issuing a complex task: *“Open amazon.com and search for a wireless charging stand that supports the Pixel 9 Pro.”* It required some initial setup to align all components, but when everything was correctly configured, BrowserOS executed the task flawlessly.

One point of caution: running BrowserOS with a fully local AI system can demand considerable system resources. If your computer’s hardware is on the weaker side, performance may suffer, and tasks could lag.
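
If your machine falls on that weaker side, one option is a smaller variant of the same model family, trading some capability for speed and a lighter memory footprint. As a sketch (check Ollama’s model library for the exact tags available):

```bash
# Pull a smaller Qwen variant that needs far less RAM than the 7B model
ollama pull qwen2.5:3b
```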

**Also:** See ZDNET’s ranking of the top 20 AI tools of 2025, along with critical advice on how to evaluate them responsibly.

**Hardware Requirements for Local AI (According to Ollama)**
– *Minimum – 8GB of RAM:* This configuration will allow only small models to run, typically those in the 3B to 7B parameter range.
– *Recommended – 16–32GB of RAM:* For a smoother and more capable experience—able to handle 13B parameter models efficiently—aim for at least 16GB. To work comfortably with models around 30B parameters or larger, consider 32GB.
– *High-end – 64GB of RAM or more:* For the largest 70B models, you’ll need at least 64GB of memory.

From my own testing, the bare minimum RAM specification is rarely enough in practice. My **System76 Thelio desktop**, equipped with 32GB of memory, manages to perform adequately for my needs. However, if you plan to run larger models or demand faster responsiveness, upgrading to 64GB or more is advisable. Even with 32GB, performance can slow noticeably if you multitask or run resource-intensive services concurrently.
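
Before committing to a model, it helps to see how much memory is actually free and what Ollama is holding in RAM. On Linux, a quick check looks like this (both are standard commands; `ollama ps` lists currently loaded models and their memory use):

```bash
# Show total, used, and available system memory
free -h

# Show which models Ollama has loaded and how much memory each occupies
ollama ps
```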

**Also:** Cybersecurity specialists continue to debate whether AI browsers introduce new privacy and security risks. Read more about the concerns experts have raised.

Provided that your hardware meets these requirements, BrowserOS can handle most agentic tasks with stability and reliability. Now, let’s examine how to reach that configuration.

**Setting Up BrowserOS with Ollama on Linux**
Assuming that BrowserOS is already installed on your system of choice, you’ll next need to install Ollama. Although Ollama provides installers for both macOS and Windows, the process is also remarkably simple on Linux. Open a terminal and execute this command:
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
Once that’s complete, download a model that supports agentic browsing—in our example, `qwen2.5:7b`—using:
```bash
ollama pull qwen2.5:7b
```
When the model finishes downloading, you can begin configuring BrowserOS to use Ollama as its AI provider.
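
If you want to confirm the model works before wiring it into the browser, a quick sanity check from the terminal (standard Ollama CLI commands) looks like:

```bash
# List locally installed models; qwen2.5:7b should appear
ollama list

# Send a one-off prompt to verify the model responds
ollama run qwen2.5:7b "Reply with one word: ready"
```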

**Configuring BrowserOS to Use Ollama**
1. Launch BrowserOS and open its dedicated AI settings page by navigating to:
`chrome://settings/browseros-ai`
2. In the interface that appears, click to enable Ollama as your provider. Then, enter the following configuration details:
– **Provider Type:** Ollama
– **Provider Name:** Ollama Qwen
– **Base URL:** Keep the default unless Ollama runs on another device in your local network—then replace `127.0.0.1` with your server’s IP address.
– **Model ID:** qwen2.5:7b
– **Context Window Size:** Ideally 12800 for high-end systems; reduce this number if your computer struggles.
– **Temperature:** 1
Ensure the correct model is selected, then set Ollama as the default AI provider within BrowserOS.
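
Before testing in the browser, you can confirm that Ollama is reachable at the Base URL. Ollama’s REST API listens on port 11434 by default, so a quick check (assuming the default local address) is:

```bash
# Returns a JSON list of installed models if Ollama is up and reachable
curl http://127.0.0.1:11434/api/tags
```

If this returns JSON rather than a connection error, BrowserOS should be able to reach the same endpoint.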

**Restarting the Ollama Service with CORS Enabled**
For BrowserOS to communicate with Ollama, the Ollama service must accept requests coming from the browser’s origin, so restart it with *Cross-Origin Resource Sharing* enabled. On Linux, execute:
```bash
sudo systemctl stop ollama
OLLAMA_ORIGINS="*" ollama serve
```
On macOS, quit the Ollama app if it’s running, then run the same `OLLAMA_ORIGINS="*" ollama serve` command in a terminal (the `systemctl` step is Linux-only). On Windows, use PowerShell or Command Prompt:
```powershell
$env:OLLAMA_ORIGINS="*"; ollama serve
```
or
```cmd
set OLLAMA_ORIGINS=*
ollama serve
```
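
Note that running `ollama serve` this way keeps the setting only for the current session. On Linux, if you’d rather make it permanent, one approach (mirroring how Ollama’s documentation sets environment variables under systemd) is to add the variable to the service unit:

```bash
# Create a systemd override for the Ollama service, then add these lines:
#   [Service]
#   Environment="OLLAMA_ORIGINS=*"
sudo systemctl edit ollama.service

# Reload systemd and restart Ollama so the setting takes effect
sudo systemctl daemon-reload
sudo systemctl restart ollama
```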

**Running Your First Local Agentic Task**
Now you can open the agent interface within BrowserOS and issue your first smart browsing request—for instance: *“Open amazon.com and search for a wireless charging stand that supports Pixel 9 Pro.”* BrowserOS will automate the process, retrieving results in a new tab.

Once you adjust to the pacing and learning curve of an agentic workflow, you’ll likely find that such browsers offer a refreshing blend of efficiency, control, and privacy. Best of all, by using your own locally installed models, you can take advantage of AI while minimizing your carbon footprint—and feel just a bit less guilty about your technological choices.

Source: https://www.zdnet.com/article/i-tried-the-only-agentic-browser-that-runs-local-ai-and-found-only-one-downside/