
**ZDNET’s key takeaways:**
- Microsoft's Copilot, the company's AI-powered assistant, can now remember or forget specific pieces of information on direct user instruction.
- Stored memories can be reviewed, edited, or deleted under *Settings > User memory*.
- As with any system that retains personal data, expanded memory also expands the potential privacy and security risks.

Mustafa Suleyman, CEO of Microsoft's AI division, announced on X (formerly Twitter) that Copilot's responses and behavior will now adapt to each individual's preferences, guided by the memory parameters users define. In practice, your interactions with Copilot become more contextually aware and tailored, shaped not only by what you tell it, but also by what you allow it to remember or ask it to forget.

To see the feature in practice, imagine asking Copilot to remember that you are a vegetarian. Later, when you request restaurant recommendations, it will automatically exclude places that don't fit your diet. You might also teach Copilot personal facts, such as your partner's name and birthday, so it can remind you at the right moments. Conversely, if the relationship ends or the information becomes outdated, you can simply instruct Copilot to forget that detail, erasing it from memory.

The feature's uses extend beyond convenience to self-improvement and daily productivity. Someone trying to establish a habit such as morning journaling, for instance, could ask Copilot to remind them each day. Per Microsoft's own example scenarios, activating these routines is as simple as telling Copilot to "Remember" or "Forget" something. All stored memories remain visible and editable in the *User memory* tab, so users keep oversight of their data. The feature has already rolled out across desktop and mobile, signaling Microsoft's readiness to deploy user-controlled memory at scale.
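Conceptually, this remember/forget/review loop maps onto a very simple data structure. The sketch below is purely illustrative; Microsoft has not published Copilot's internal memory API, so every name here (`UserMemoryStore`, `remember`, `forget`, `review`) is a hypothetical stand-in for the behavior the article describes.

```python
# Hypothetical sketch of user-controlled assistant memory.
# Not Microsoft's implementation; names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class Memory:
    """A single user-approved fact the assistant may recall."""
    key: str
    value: str
    created: datetime = field(default_factory=datetime.now)


class UserMemoryStore:
    """Remember/forget semantics like those the article describes."""

    def __init__(self) -> None:
        self._memories: dict[str, Memory] = {}

    def remember(self, key: str, value: str) -> None:
        # "Remember that I'm a vegetarian" -> remember("diet", "vegetarian")
        self._memories[key] = Memory(key, value)

    def forget(self, key: str) -> None:
        # "Forget my partner's birthday" removes the entry entirely,
        # mirroring user-initiated deletion in Settings > User memory.
        self._memories.pop(key, None)

    def review(self) -> list[Memory]:
        # Equivalent to browsing the User memory tab: every stored
        # item stays visible and editable by the user.
        return list(self._memories.values())


store = UserMemoryStore()
store.remember("diet", "vegetarian")
store.remember("partner_birthday", "March 3")
store.forget("partner_birthday")
print([m.key for m in store.review()])  # ['diet']
```

The key design point the article emphasizes is that deletion is user-initiated and total: a forgotten fact is removed from the store, not merely hidden from responses.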

**Striking a balance between remembering and forgetting**

Creating an AI assistant capable of genuine usefulness requires developers to carefully navigate the balance between memory retention and privacy protection. When a system remembers every minute detail of a user’s life and preferences, it risks data overload and latency during processing, not to mention increased vulnerability to privacy intrusion. On the other hand, a chatbot that discards all context after each conversation would function no better than a generic search engine, offering little personalization or long-term utility.

Therefore, rather than applying a fixed model, companies like Microsoft have chosen to give end users the autonomy to control how much their AI assistant remembers. This personalization strategy allows individuals to determine how deeply the system can engage with and understand their digital patterns, effectively outsourcing the regulation of AI memory boundaries to users themselves.

**Building increasingly intuitive AI assistants**

Microsoft first introduced the concept of “personalization and memory” for Copilot earlier this year, portraying it as a pivotal milestone on the path toward creating AI companions that evolve in concert with human users. By accumulating a contextual history of user interactions, Copilot gradually develops an intimate understanding of habits, tones, and preferences. This approach mirrors the way social media platforms like Instagram and TikTok learn from user engagement to refine and individualize content recommendations.

In a May blog post, Microsoft elaborated on this principle, explaining that as Copilot observes patterns in how people interact with it, the system enhances its predictive accuracy, producing suggestions that feel authentically tailored — whether that involves recommending a vacation locale or identifying a product aligned with a user’s taste. Essentially, the tool aspires to transform from a static assistant into an adaptive digital companion — one that not only processes queries but also anticipates needs and reflects an understanding of its user.

This innovation emerged shortly after OpenAI enhanced ChatGPT’s own memory capacity, enabling the system to draw upon previous user conversations to refine its responses. Around the same time, Anthropic introduced similar functionality for Claude, allowing it to retrieve contextual information from earlier exchanges. While Claude’s recall ability is active by default, users maintain the option to disable it, affirming the industry-wide trend of emphasizing user control.

All of these developments indicate a broader movement among AI developers: to craft chatbots that transcend mere question-answering roles and evolve into reliable digital confidants — systems capable of retaining experiences, learning, and reshaping their understanding of users over time.

**The inherent risks of remembering**

Despite the evident convenience of AI memory, its expansion brings unavoidable ethical and security challenges. On a technical level, any model that stores personal information carries an inherent risk: in the event of a breach, sensitive material, from names to behavioral patterns, could be exposed. On a psychological level, a chatbot that steadily learns a user's communication style and worldview could, over time, fall into dialogue patterns that reinforce bias or nurture unhealthy cognitive illusions. Journalists have dubbed this emerging concern "AI psychosis," describing cases where users form emotional dependencies on, or distorted beliefs about, their digital assistants.

Allowing users to modify or disable Copilot’s memory directly represents a vital safeguard, but not all individuals possess the technical knowledge or awareness to manage these privacy controls effectively. Many users remain unaware that their conversational data is stored on external servers or may underestimate the implications of sharing personal details with an AI platform.

From a regulatory standpoint, Europe’s General Data Protection Regulation (GDPR) mandates that companies clearly disclose when personal data — including identifiers and preferences — is being collected and processed. However, the United States currently lacks an equally comprehensive federal standard. Consequently, American users depend largely on the transparency and ethical policies of private tech firms to ensure that their data isn’t misused or retained beyond reasonable necessity.

Ultimately, Microsoft’s introduction of editable memory in Copilot encapsulates both the promise and peril of increasingly personalized AI. It represents an important stride toward human-centric design, giving individuals meaningful agency over their interactions. Yet it simultaneously underscores the urgent need for stronger privacy frameworks and heightened public understanding of data stewardship in the age of intelligent machines.

Source: https://www.zdnet.com/article/you-can-now-edit-microsoft-copilots-memories-about-you-heres-how/