Just last month, the song “Walk My Walk” ascended to the number-one position on Billboard’s Country Digital Song Sales chart, signaling both its immediate popularity and its resonance within the country music sphere. The composition, a rhythmically compelling stomp-and-clap track infused with a brooding, reflective energy, features lyrics about gritty survival and perseverance. The words appear to be sung by a rugged cowboy, classic hat and all, pictured on the act’s Spotify profile, but the performer, Breaking Rust, is not a person at all. Behind the human veneer is a machine-learning system trained to produce songs that replicate the essence of authentic country artists. Yet not everyone views the success of this digital performer with admiration. Country-rap musician Blanco Brown has accused the team behind Breaking Rust of crafting a song that mirrors his distinctive genre-blending style, alleging that AI was employed to imitate his creative voice. When questioned about the origins and composition of the track, whoever is responsible for Breaking Rust remained silent, declining to clarify how the sound was generated.

This situation represents only the newest chapter in the rapidly evolving saga of AI-generated music, an arena where questions of authorship, originality, and ownership collide with cutting-edge innovation. As artificial intelligence grows more adept at reproducing human voices, tones, and emotional nuance, it becomes increasingly difficult for listeners to discern whether a song was created by a living artist or synthesized by algorithms. Over the past two years, countless AI-generated tracks have flooded digital platforms, going viral on social networks and streaming services at a pace that major record labels often struggle to manage. Many of these songs are nearly indistinguishable from hit tracks by beloved performers, yet they are created with nothing more than text prompts and machine-learning models trained on existing music.

When such AI-produced tracks first started amassing millions of plays, record labels responded with a barrage of defensive measures, including formal threats and lawsuits aimed at halting the use of copyrighted material to train generative models. Universal Music Group (UMG), a major player in the global recording industry, led the charge—forcing platforms such as YouTube to remove unauthorized AI recreations of famous artists’ performances. Spotify subsequently began purging entire playlists of automatically generated songs that were apparently streamed by bots to manipulate earnings. Among the most famous incidents was a viral track that appeared to feature Drake and The Weeknd but was, in truth, an AI creation produced by Ghostwriter, an anonymous artist who performs only while shrouded in white garments and dark sunglasses. These early confrontations exposed an uneasy balance between technological progress and the protection of creative rights.

Recently, however, the climate seems to be shifting. Instead of waging perpetual legal wars against innovators, music conglomerates have begun to explore collaboration. Warner Music Group (WMG), for example, settled its lawsuit against the AI music platform Suno, an application capable of generating songs in the recognizable styles of artists such as ABBA and Chuck Berry. This settlement quickly transformed into a formal partnership, with WMG’s CEO Robert Kyncl proclaiming the arrangement “a victory for the creative community.” Likewise, UMG reached an accord with AI company Udio, resolving a high-profile copyright dispute and announcing plans for a subscription-based service that would combine licensed music from UMG’s roster with generative AI tools. These collaborations mark a significant shift in strategy: from resistance and restriction to cautious acceptance and integration of AI into the industry’s economic model.

Yet even with such partnerships, a deep philosophical and financial tension lingers. As Mark Mulligan, founder and senior music analyst at MIDiA Research, observes, every minute that a listener spends engaged with an AI-generated track inevitably detracts from time that could have been spent listening to a human-made song. This shift reflects a broader pattern affecting multiple creative industries, from Hollywood screenwriting to digital journalism and visual art. Artificial intelligence, which often learns from existing copyrighted materials, stands in a legal gray zone. Many industry leaders now recognize that AI’s transformative potential is too great to ignore and are opting to coexist with the technology rather than combat it indefinitely. As Chris Wares of Berklee College of Music notes, record labels are actively “futureproofing themselves,” preparing for a world where algorithmic creativity will be permanent and pervasive.

The explosion of AI-generated music has also contributed to a flood of content inundating streaming platforms. With over 100 million songs already on services such as Spotify, Apple Music, and Bandcamp, countless tracks receive little or no human attention. Deezer, a French streaming service, disclosed earlier this year that approximately 20,000 fully AI-created songs were uploaded daily, constituting nearly one-fifth of all new releases on the platform. Amid this oversaturation, human musicians must compete for visibility against machine-made music generated in staggering volumes. The case of The Velvet Sundown, a mysterious collective that swiftly garnered a million streams across two albums while displaying AI-generated band photos, demonstrates the bizarre opacity that now characterizes the modern digital music scene. By late November, Billboard had identified at least six AI or AI-assisted songs that managed to enter its official charts, underscoring how deeply algorithmic music has already penetrated mainstream culture. Reacting to these developments, Spotify updated its impersonation policy to ban songs that reproduce an artist’s voice without permission.

The rise of AI also raises questions about novelty, authenticity, and value. Suppose a viral AI mashup reunites legendary artists like Stevie Nicks and Lindsey Buckingham for a fictional Fleetwood Mac album or stages a digital truce between rappers Kendrick Lamar and Drake through an imagined collaboration. Such artificial creations might amass enormous streaming numbers due to sheer curiosity, diverting attention—and revenue—away from the artists’ true creative work. Nevertheless, the emerging partnerships between major labels and AI platforms suggest a potential compromise: allowing AI to serve as a new revenue channel rather than merely a disruptive force. The Warner-Suno agreement, for example, stipulates that only paying users will be able to download AI-generated audio and that artists must explicitly opt in before their vocals, likenesses, or songwriting styles can be reproduced. In theory, if properly managed, these stipulations could yield passive income for musicians in an era when traditional streaming royalties have dwindled sharply. However, the irony remains that artists may find themselves competing not only against new talent but also against algorithmic versions of their own sound.

Mulligan captures this paradox succinctly, arguing that no form of generative AI music can truly be “artist-first” in a finite economy of attention. Every second spent playing an AI composition is a second diverted from human artistry. At the same time, the accessibility of generative tools allows ordinary listeners to participate more directly in the creation of music, blurring the line between audience and artist. Historically, music was inherently communal—people performed it together to share stories and preserve cultural memory. Only in the modern era of mass distribution, recording, and broadcasting did a separation arise between creators and consumers. Now, as Mulligan puts it, AI may be restoring that sense of participatory expression, widening the creative funnel and inviting fans once again into the process of musical invention.

Some artists are embracing this participatory model. Grimes, for instance, has openly offered AI replicas of her voice for fans to experiment with, viewing it as an opportunity rather than a threat. Whether other mainstream pop icons will follow remains uncertain; many may be reluctant to authorize AI systems to vocalize words they never wrote or approved. Still, curiosity about these possibilities continues to grow. In response to the controversy surrounding “Walk My Walk,” Blanco Brown released his own “trailertrap” remix of the song—a symbolic act of reclaiming his creative style from the digital imitator that replicated it. While this remix has amassed only a modest few thousand streams so far, Brown’s sentiment is unmistakable: if anyone is going to sound like him, it should be him. As the music world moves deeper into this era of generative soundscapes, listeners may increasingly find themselves wondering whose voice they are truly hearing and what lies behind the songs that fill their headphones each day.

Amanda Hoover, a senior correspondent at Business Insider, continues to dissect these technological and cultural shifts, illuminating how they mirror larger trends shaping the modern creative economy. Her reporting illustrates a fundamental truth: in the age of AI, the distinction between invention and imitation is growing ever thinner, forcing the music industry to reconsider the essence of artistic authenticity and the future of sound itself.

Source: https://www.businessinsider.com/why-the-music-industry-is-changing-its-tune-on-ai-2025-12