Radhika Rajkumar/ZDNET

**ZDNET's Key Takeaways**
- The entanglement of AI and Big Tech continues to erode personal privacy.
- Proton's secure, encrypted tools are winning favor with privacy-conscious users.
- Proton CEO Andy Yen warns of a future overrun by autonomous, uncontrollable AI agents.

As artificial intelligence's prominence expands, anxieties over user safety, data protection, and ethical use have intensified almost in tandem. Over the past year alone, AI has become a common instrument for cybercriminals, enabling seasoned and novice attackers alike to steal data with increasing ease. The same technologies are scaling surveillance programs to previously unthinkable levels, allowing governments, corporations, and even individuals to observe human behavior en masse. And AI agents such as *OpenClaw*, celebrated by technology giants like Nvidia and Meta, have demonstrated their volatility, occasionally leaking or destroying sensitive information in unpredictable ways.

Earlier this month, at the Semafor World Economy Summit in Washington, D.C., where more than 500 corporate executives convened with government representatives to discuss the evolving global business climate, AI's impact on privacy and cybersecurity took center stage. Among the key voices was Andy Yen, CEO of Proton, the company behind an ecosystem of privacy-centered services including Proton Mail and Proton VPN. After his panel, I spoke with Yen about one of the most pressing tensions in modern technology: whether privacy and artificial intelligence can sustainably coexist, how that relationship might evolve in the coming years, and why he believes Proton is uniquely equipped to endure.

**Privacy in the Collective Consciousness**
The relationship between artificial intelligence and personal privacy is inherently fraught with trade-offs. The general logic asserts that the more expansive and diverse the dataset an AI system can access, the more effectively it can perform — improving accuracy, personalization, and usefulness for both enterprise and individual users. Yet this progress comes at a cost: a direct confrontation between technological effectiveness and the boundaries of personal risk tolerance. Interestingly, this tension has not dampened enthusiasm. Instead, AI adoption has skyrocketed, particularly within sensitive sectors such as healthcare, where efficiency and precision are often valued as highly as confidentiality itself.

Since Proton’s establishment in 2014 — years before artificial intelligence became embedded in the average consumer’s daily routine — the company has positioned itself as a champion of privacy-first alternatives to the offerings of the tech giants Google, Microsoft, and Meta. Nevertheless, according to Yen, the rapid proliferation of AI tools has not necessarily heightened public awareness of data privacy issues. He attributes this somewhat paradoxical reality to a generational disparity between those aware of privacy risks and those capable of combating them.

“There are more people today who understand the importance of privacy, yet lack the technical fluency required to adequately safeguard themselves,” Yen explained. “Middle-aged users, paradoxically, may be the least protected, since we neither share the privacy instincts of our parents nor fully grasp the technological consequences of our digital habits. We adopt technology enthusiastically, but our awareness remains incomplete, leaving us vulnerable.” Despite this generational imbalance, Yen remains convinced that education — consistent, informed, and broadly accessible — will be the ultimate remedy.

“The most effective form of protection,” he added, “comes from teaching people to comprehend the real risks. Once individuals understand what is at stake, most other protective behaviors follow organically.” He further suggested that the current collective indifference toward privacy might diminish as greater social and technological awareness develops over time.

“When we launched Proton in 2014, perhaps only one in ten people truly grasped the underlying business models of platforms like Google and Facebook,” Yen noted. “Today, that number has climbed to around four in ten, and as services such as OpenAI begin to monetize attention and promote algorithmic bias for profit, public awareness continues to expand — perhaps reaching seven in ten.”

Despite appearing indifferent, Yen considers today’s younger generations better prepared to navigate the landscape being reshaped by AI. “Young users,” he observed, “know how major technology companies generate revenue, they understand the logic of targeted advertisements and algorithmic influence — although they often seem not to care. However, awareness without concern is still preferable to ignorance, because an informed public can eventually be persuaded to care.”

**A Shift Toward Privacy-First AI Adoption**
The rising popularity of DuckDuckGo’s privacy-oriented chatbot, *Duck.ai*, which recently recorded a surge in user traffic, exemplifies a growing interest in privacy-centric AI experiences. While such tools still lag behind market leaders like ChatGPT or Anthropic’s Claude, their gradual uptake echoes the pattern visible at Proton. Yen pointed out that within his own organization, *Lumo* — Proton’s encrypted AI chatbot — has quickly become the company’s fastest-growing product.

“AI has become integrated into everyday life,” Yen elaborated. “No matter how cautious we are, we rely on it for writing, organization, and communication. Yet deep down, users still harbor skepticism. The opportunity to enjoy the advantages of AI while maintaining firm guarantees about data confidentiality is transformative. Over time, as individuals weigh convenience against trust, more will inevitably migrate to privacy-focused alternatives.”

**AI’s Most Dangerous Blind Spot**
Even with Proton’s suite of encrypted protections, Yen acknowledges that there are inherent boundaries to what his company can defend against. When asked about AI-related risks that currently exceed Proton’s defenses, he responded without hesitation: AI agents themselves.

“You could possess the strongest encryption protocol imaginable,” he said, “but if an autonomous agent on your device gains authorized access to your Proton Mail account and then malfunctions or acts maliciously by publishing your data publicly, no encryption mechanism can protect you from that. That’s an unavoidable limitation.” Although Proton could, in theory, design its own AI agent insulated from such vulnerabilities, Yen confessed that such an initiative remains speculative and is not part of the current development roadmap.

One promising direction, in Yen’s estimation, involves embracing local AI — that is, artificial intelligence capable of running directly on personal devices rather than relying on distant cloud servers. Proton’s own *Scribe AI* assistant already offers this option to some extent. While today’s consumer hardware still struggles to support the necessary computational load, continuing advancements in device performance make Yen optimistic.

“If you compare the computing capacity of a modern iPhone to that of early smartphones from a decade ago, the difference is exponential. As these gains continue, running smaller, highly efficient models locally will become the norm rather than the exception. Future AI systems will not necessarily grow in complexity or size but in efficiency and practicality,” he explained.
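Yen's efficiency argument comes down to simple arithmetic: a model's weight-storage footprint scales with parameter count times bits per weight, which is why quantization, not ever-larger models, is what makes on-device inference plausible. A back-of-the-envelope sketch (the 3-billion-parameter figure is an illustrative assumption, not a Proton or Apple number):

```python
def model_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight-storage footprint of a model, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# A hypothetical 3-billion-parameter model:
fp16 = model_memory_gb(3e9, 16)  # full 16-bit precision
q4 = model_memory_gb(3e9, 4)     # 4-bit quantized

print(f"fp16: {fp16:.1f} GB, 4-bit: {q4:.2f} GB")  # fp16: 6.0 GB, 4-bit: 1.50 GB
```

At 16-bit precision such a model needs roughly 6 GB for weights alone, beyond many phones; quantized to 4 bits it drops to about 1.5 GB, comfortably within a modern handset's memory, at some cost in accuracy.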

**Safeguarding Future Generations**
In Yen’s view, the surest way to defend the next generation from the predatory data practices of Big Tech lies in preemptive action — ensuring children avoid entanglement in exploitative digital ecosystems from the start. Proton recently introduced a feature allowing parents to reserve their child’s first email address even before birth, offering a symbolic yet meaningful head start in cultivating digital safety.

“For many parents, the turning point in caring about privacy arrives when they have children,” Yen said. “At that moment, they face a crucial choice: entrust their child’s digital footprint to the data-harvesting machine of corporate tech, or begin their online life with a privacy-respecting foundation?” For him, timing is everything.

“If someone begins prioritizing privacy only at age forty, after decades of exposure within exploitative ecosystems, that’s still admirable — but it’s undeniably late. How much better would it be if we could provide the next generation with the right start from day one?”

**Can Privacy-First AI Compete at Scale?**
While the idea of an AI ecosystem that honors privacy sounds promising, its real impact depends on achieving scale — convincing consumers and companies alike to choose secure systems over convenience-driven platforms. The challenge lies in encouraging users to prioritize privacy above the seductive personalization that data-intensive AI can deliver.

When asked about the feasibility of developing powerful AI systems constrained by encryption, Yen confirmed that computation on encrypted data is possible, though the process is significantly more complex and expensive. “For instance,” he explained, “if you compare Google Workspace and Proton Workspace, they may appear similar from the user’s perspective, but our task is at least ten times harder. Encryption imposes extra computational overhead, which increases costs and slows development. Nonetheless, it ensures a product that genuinely protects its users’ data, delivering more authentic value over time.”
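Yen's point that computing on encrypted data is possible but markedly more expensive can be illustrated with a classic partially homomorphic scheme. The toy Paillier sketch below (demo-sized primes, nowhere near secure, and not a description of Proton's actual stack, which relies on end-to-end encryption rather than homomorphic computation) shows a server adding two numbers it can never read: multiplying the ciphertexts yields a ciphertext of the sum.

```python
import math
import random

def keygen(p: int, q: int):
    """Toy Paillier keypair from two (insecure, demo-sized) primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1  # standard simple choice of generator
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n * n) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m: int) -> int:
    n, g = pub
    while True:
        r = random.randrange(1, n)      # random blinding factor
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pub, priv, c: int) -> int:
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen(293, 433)
c1, c2 = encrypt(pub, 42), encrypt(pub, 58)
c_sum = (c1 * c2) % (pub[0] ** 2)   # multiplying ciphertexts adds plaintexts
print(decrypt(pub, priv, c_sum))    # 100
```

Note the overhead Yen alludes to: a single encrypted addition requires several large modular exponentiations, versus one CPU instruction on plaintext, which is one reason encrypted computation remains roughly an order of magnitude harder to ship.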

Proton’s pricing for its Workspace suite reflects this philosophy of balance between performance and privacy: competitively structured between $12 and $25 monthly depending on plan and billing cycle, with a commitment to avoid annual price hikes or penalties for loyal customers. A company spokesperson emphasized that efficiency within Proton’s operations keeps costs relatively contained despite encryption’s financial burden.

“I don’t perceive any fundamental technical obstacles preventing us from reaching comparable performance,” Yen asserted. “It simply requires time.” He further highlighted that Proton’s business independence — notably the absence of venture capital investors — speaks to the sustainability of its model. “The fact that Proton remains self-financing demonstrates that privacy-respecting business frameworks can, in fact, scale more sustainably than many assume.”

In essence, Yen’s perspective encapsulates a pragmatic yet hopeful vision: while the path toward harmonizing AI innovation with authentic privacy protections is fraught with obstacles — technical, economic, and educational — the direction of travel is clear. A more conscientious public, empowered by education and supported by ethical companies, could still redefine what technological progress means in the age of artificial intelligence.

Source: https://www.zdnet.com/article/proton-ceo-andy-yen-interview-ai-privacy-security-semafor/