Microsoft has announced a major enhancement to Visual Studio Code: automatic AI model selection in GitHub Copilot. The new feature, designed around efficiency and user convenience, dynamically picks the model best suited to a given coding task to deliver what the company describes as “optimal performance.” Instead of requiring developers to choose among the available models manually, the editor now makes that decision on its own. In practice, free-tier GitHub Copilot users will get a rotating mix of models, including Claude Sonnet 4, GPT-5, GPT-5 mini, and potentially others, while paid subscribers will find their experience anchored primarily on Anthropic’s Claude Sonnet 4.

Prioritizing Claude Sonnet 4 over OpenAI’s recently unveiled GPT-5 amounts to an implicit acknowledgment that Microsoft currently considers Anthropic’s technology superior for coding and software development. According to people with direct knowledge of Microsoft’s internal strategy, the company has spent the past several months advising its own engineers and in-house developers to rely on Claude Sonnet 4, a strong signal of its confidence in the model’s reliability, output quality, and suitability for complex programming tasks.

This position was made explicit in June, when Julia Liuson, who leads Microsoft’s developer division, told staff in an internal communication that internal benchmarks had identified Claude Sonnet 4 as the company’s recommended model for use in GitHub Copilot. Notably, that recommendation was formalized before GPT-5’s public release, and subsequent developments indicate the guidance has not changed, further reinforcing Microsoft’s endorsement of Anthropic’s offering.

In parallel with these external partnerships, Microsoft has been deepening its commitment to its own AI ecosystem through substantial, ongoing research and development investment. Mustafa Suleyman, the head of Microsoft AI, told employees at a private town hall that the company is making what he described as “significant investments” in training proprietary models. He noted that one experimental system, known internally as MAI-1-preview, had so far been trained on a cluster of 15,000 H100 GPUs, a figure he emphasized is relatively modest compared with the much larger clusters typically used to train cutting-edge models. His remarks underscored Microsoft’s long-term ambition to cultivate in-house capabilities even as it leans on strategic partners.

Further extending Anthropic’s role across its portfolio, Microsoft is reportedly preparing to integrate some of Anthropic’s models into Microsoft 365, its flagship enterprise productivity suite. According to *The Information*, certain Copilot features inside Microsoft 365, including components in Excel and PowerPoint, will soon be partially powered by Anthropic’s technology. Internal testing reportedly showed that, for specific functions, Anthropic’s models delivered more accurate, efficient, or practical results than the latest versions from OpenAI. Those findings help explain why Microsoft is increasingly weaving Anthropic’s work into both development-oriented and productivity-focused products.

Even as it deepens its relationship with Anthropic, Microsoft continues to maintain a multifaceted and financially significant partnership with OpenAI. The two companies announced a new framework agreement just last week, one that could pave the way for OpenAI’s highly anticipated initial public offering. Since 2019, Microsoft has committed more than $13 billion to OpenAI, building not only a financial alliance but also a web of revenue-sharing arrangements intended to align the two companies’ growth. Notably, the latest deal gives OpenAI latitude to use cloud infrastructure from providers outside Microsoft’s Azure ecosystem, a sign of shifting dynamics in the AI industry and a subtle departure from the previously more exclusive arrangement. Microsoft is expected to release additional details soon clarifying what it calls the “next phase” of the partnership, signaling that the relationship is poised for another stage of transformation.

Taken together, these developments portray a company carefully balancing external collaborations with Anthropic and OpenAI while building out its own AI research efforts. For developers, the immediate implication is clear: the tools in Visual Studio Code and GitHub Copilot will increasingly run on models chosen by Microsoft’s systems, reducing the need for manual selection while aiming to maximize performance in real-world coding workflows.

Source: https://www.theverge.com/report/778641/microsoft-visual-studio-code-anthropic-claude-4