OpenAI has introduced a new layer of customization to ChatGPT, letting users adjust aspects of the chatbot's personality such as its warmth, enthusiasm, and use of emojis. Announced through one of OpenAI's official social media accounts, the new options reflect the company's growing effort to make AI-driven conversations more adaptable to individual preferences and communication styles.

These personalization settings, along with comparable adjustments to ChatGPT's formatting preferences (such as how frequently it uses headers and bullet lists), are now accessible directly within the expanded Personalization menu. There, users can set each of these expressive elements to appear "More," "Less," or at the "Default" level. The update complements the existing option to choose an overarching conversational style and tone, ranging from Professional to Candid or Quirky, which OpenAI first introduced in November. Together, these controls give users a far more nuanced way to shape how ChatGPT interacts, refining not only what it says but how it communicates emotionally and stylistically.

Throughout this year, the tone of ChatGPT's responses has been a recurring topic of discussion and experimentation within the company and its user community. OpenAI previously rolled back an update after widespread feedback that the model had become excessively deferential, or "too sycophantic." Later, after users raised the opposite concern, that the chatbot had grown colder, less relatable, and overly formal, the company modified GPT-5 to be "warmer and friendlier." This back-and-forth illustrates the persistent challenge of designing an AI that feels approachable and empathetic without crossing into unnatural flattery or emotional overfamiliarity.

Beyond the immediate experience of chatting with AI, these tonal refinements touch on larger questions raised by researchers and critics of emerging technology. Some experts in human–computer interaction have noted that chatbots' tendency to validate user opinions and lavish users with praise may function as a "dark pattern": in digital ethics and design, the term refers to behaviors or interface features that, while seemingly benign, subtly encourage compulsive or addictive engagement. Critics argue that excessive affirmation from conversational agents could reinforce emotional dependence or foster unrealistic expectations of social validation. They warn that such patterns, left unchecked, could measurably harm users' mental well-being, conditioning people to seek emotional feedback and comfort from technology rather than from genuine human relationships.

By offering greater transparency and user control over tone and expressiveness, OpenAI appears to be addressing these concerns while also letting users define the kind of conversational experience they want. The change marks a shift in AI interaction, away from a one-size-fits-all model and toward an environment where users can calibrate their digital counterpart's behavior to fit situational needs, personal sensibilities, and even emotional boundaries. In doing so, OpenAI continues to navigate the delicate balance between human-like friendliness and the ethical responsibility inherent in designing emotionally persuasive technology.

Source: https://techcrunch.com/2025/12/20/openai-allows-users-to-directly-adjust-chatgpts-warmth-and-enthusiasm/