At its annual Max conference, Adobe offered an in‑depth look at a collection of experimental artificial intelligence tools that hint at the company’s vision for the future of digital creativity. These early‑stage prototypes, internally referred to as “sneaks,” are designed to radically streamline and expand the ways editors can interact with photos, videos, and audio files. By allowing creators to make complex adjustments almost instinctively—through natural prompts and intuitive controls—Adobe demonstrated how these technologies could soon remove many of the technical barriers that typically separate professional‑grade editing from everyday use.

Among these experimental projects, one of the most visually captivating demonstrations was **Project Frame Forward**. This system reimagines how video editors modify footage by allowing them to add or remove objects without relying on traditional masking techniques. Masking—normally a laborious, manual process that requires tracing and isolating subjects frame by frame—can be one of the most time‑intensive aspects of post‑production. Frame Forward eliminates that need entirely. In Adobe’s live preview, the software instantly identified and selected a woman appearing in the first frame of a video, effortlessly erased her image, and filled the space she occupied with a convincingly reconstructed background that maintained consistent texture and lighting. The automated fill resembled the behavior of well‑known Photoshop capabilities such as Content‑Aware Fill or Remove Background, but with the added sophistication of extending the edit seamlessly across an entire video sequence after only a few clicks. What once required hours of meticulous manual editing could now be achieved almost instantly.

The same project enables the reverse process as well—adding new elements instead of deleting them. An editor simply sketches within the frame where the object should appear and then describes the desired insertion through an AI prompt. The artificial intelligence interprets that description and integrates a new, context‑aware object across every frame of the video. Adobe’s demonstration emphasized how naturally the generated objects adapted to their environment: for instance, an artificially created puddle on the ground mirrored the reflection of a real cat moving throughout the scene, responding dynamically to motion, light, and angles. In other words, the AI not only inserted the element but also understood its relationship to the physical space and the behavior of the existing subjects.

Another remarkable innovation showcased at the event was **Project Light Touch**, a generative‑AI‑driven tool specifically crafted to manipulate illumination within still images. Traditional photographic lighting adjustment often demands physical reshoots or complex gradient mapping, but Light Touch redefines that workflow entirely. The system allows users to reshape and reposition light sources—changing the direction from which light falls, altering shadows, or even creating the illusion that unlit lamps in a photo were glowing all along. Users can control diffusion and intensity levels with precision, experimenting with softer ambient tones or sharply directional beams that mimic natural sunlight. Even more impressively, light sources can behave dynamically in real time; they can be dragged across the editing workspace to bend or wrap light around people and objects, simulate internal illumination—such as making a carved pumpkin pulse with inner glow—or transform the overall ambiance of the scene from bright daylight to moody nighttime. The color characteristics of these virtual lights can also be finely tuned to adjust warmth and saturation, or to produce vivid RGB‑style effects that dramatically change the emotional tone of an image.

Complementing these visual innovations, Adobe also presented **Project Clean Take**, a powerful experiment in generative audio editing that redefines vocal and sound correction. This tool uses artificial intelligence to refine speech recordings with a degree of flexibility previously attainable only through time‑consuming rerecording sessions. With Clean Take, editors can alter how a line is delivered—shifting the speaker’s tone from neutral to cheerful or inquisitive, for example—without losing the authenticity or unique vocal texture of the original performance. Furthermore, should a word or phrase need replacement, the tool generates a seamless substitute in the same voice, maintaining rhythm, tone, and cadence. Beyond speech correction, Clean Take introduces advanced noise separation: background sounds like chatter, traffic, or ambient music can be automatically disentangled into discrete layers, enabling precise control over each element. This process preserves the integrity of the voice while improving overall clarity and balance within the audio track.

The set of projects unveiled under the “sneaks” initiative extends even further. Among the additional highlights was **Project Surface Swap**, a tool that permits instant changes to the texture or material of any surface within an image or video—transforming a wooden tabletop into marble or metal with a single adjustment. **Project Turn Style** offers a way to rotate or pivot objects within still images as if they were fully realized three‑dimensional models, opening the possibility of adjusting perspective or lighting without recreating the entire shot. Meanwhile, **Project New Depths** introduces a new spatial understanding of photographs, allowing users to edit as though they were working inside a three‑dimensional environment. The software can intelligently determine when newly added items should appear partially hidden behind existing objects, automatically preserving realistic spatial relationships within the original composition. For those interested in exploring deeper details and visual examples, Adobe has made expanded explanations and previews available on its official blog.

It is important to note that these “sneaks” remain experimental; they are not yet available for public use, nor are they guaranteed to evolve into official offerings within Adobe’s Creative Cloud ecosystem or its Firefly suite of generative tools. However, there is a precedent that inspires optimism. Several of today’s widely used features—such as Photoshop’s **Distraction Removal** and **Harmonize** tools—originated as concept demonstrations before eventually maturing into core software capabilities. This historical pattern suggests that while these current experiments may change form or functionality, the fundamental technologies behind them are likely to surface in some capacity in future Adobe products. Together, these previews reveal a clear direction: Adobe is not merely exploring automation, but redefining creative control itself, merging human artistic intent with machine intelligence to accelerate and enhance every aspect of the editing process.

Source: https://www.theverge.com/news/811602/adobe-max-2025-sneaks-projects