Industry Trends
AI Image Generation in Film and Advertising: The 2025 State of Play
by James Weller

From experiment to infrastructure
Two years ago, AI image generation in professional media was a side project — something a few forward-thinking studios were exploring while the bulk of production continued on traditional tracks. Today, it is infrastructure. The question at major studios is no longer whether to use AI image generation tools but which tools to integrate at which stage of the production pipeline.
The shift happened faster than most industry observers predicted, driven not by a single transformative release but by a sustained sequence of model improvements that collectively pushed output quality above the threshold where VFX supervisors and creative directors were willing to trust it. When the quality risk drops below the efficiency gain, adoption accelerates — and that crossing point arrived for AI image generation sometime in late 2023.
De-ageing and face work enter the standard toolkit
The clearest example of AI's integration into professional production is de-ageing. What previously required weeks of frame-by-frame manual work by specialist VFX artists can now be accomplished in days using DeepFaceLab-class models combined with proprietary colour grading pipelines. The cost reduction is significant enough that studios are applying de-ageing techniques to footage that would previously have been cost-prohibitive to alter.
Face replacement for stunt work has followed a similar trajectory. The ability to replace a stunt performer's face with the principal actor's in post — with results that hold up at theatrical resolution — has changed the calculus of what is feasible on a given budget. These are not experimental capabilities being explored in isolated projects; they are standard line items in production budgets at major studios.
Advertising discovers what AI actually makes possible
The advertising industry's adoption of AI image generation has produced something genuinely new rather than simply replacing existing workflows. The most significant development is not cost reduction — it is the ability to produce localised campaign variants at a scale that was previously logistically impossible. A single global campaign can now generate hundreds of regional variants with culturally appropriate faces, settings, and visual styles, at a cost that makes hyper-localisation economically viable for the first time.
For cosmetics and personal care brands in particular, the ability to render the same product across a wide range of skin types, tones, and face structures in a single production run has transformed campaign development. Creative directors who once had to choose which audiences to represent in a campaign are now able to represent all of them simultaneously, with consistent production quality across every variant.
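The mechanics of hyper-localisation are essentially combinatorial: one campaign brief is expanded into a grid of region and appearance variants, each of which becomes a generation job. A minimal sketch of that expansion step, with the model call stubbed out and all names (the Variant fields, the prompt template) purely illustrative:

```python
from dataclasses import dataclass
from itertools import product


@dataclass(frozen=True)
class Variant:
    """One localised generation job (illustrative fields, not a standard schema)."""
    region: str
    skin_tone: str
    prompt: str


def build_variants(base_prompt: str, regions: list[str], skin_tones: list[str]) -> list[Variant]:
    """Expand a single campaign brief into per-region, per-tone prompts.

    In a real pipeline each Variant would then be dispatched to an
    image-generation model; here we only build the job list.
    """
    return [
        Variant(r, t, f"{base_prompt}, {t} skin tone, {r} setting")
        for r, t in product(regions, skin_tones)
    ]


variants = build_variants(
    "model holding product, studio lighting",
    regions=["JP", "BR", "DE"],
    skin_tones=["light", "medium", "deep"],
)
print(len(variants))  # 3 regions x 3 tones = 9 jobs
```

The point of the sketch is the scaling behaviour: variant count is the product of the localisation axes, which is exactly why hundreds of variants that were logistically impossible by manual production become a routine batch job.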
The regulatory gap and how studios are responding
The rapid commercialisation of AI face generation has moved substantially faster than the regulatory frameworks meant to govern it. Performer likeness rights, consent requirements, and disclosure obligations are all being actively litigated and legislated across multiple jurisdictions simultaneously, creating a patchwork of obligations that is genuinely difficult to navigate at global production scale.
Studios that are handling this well have recognised that waiting for regulatory clarity is not a viable strategy. Instead, they are building consent management, usage logging, and content provenance into their AI image workflows now — creating audit trails that will satisfy whatever specific requirements ultimately emerge. The studios that are not doing this are accumulating legal exposure that will become increasingly difficult to manage as regulations crystallise.
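What an audit-trail entry might look like in practice: each generated asset is logged with its content hash, the performer and consent record it depends on, and a hash of the entry itself so successive records can be chained. This is a hypothetical sketch, not any studio's actual schema; all field names are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone


def provenance_record(asset_bytes: bytes, performer_id: str,
                      consent_ref: str, model_name: str) -> dict:
    """Build a tamper-evident log entry for one generated asset.

    Field names are illustrative. The asset hash ties the entry to the
    exact output file; the entry hash lets successive records be chained
    so retroactive edits to the log are detectable.
    """
    entry = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "performer_id": performer_id,
        "consent_ref": consent_ref,
        "model": model_name,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    entry["entry_sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry


rec = provenance_record(
    b"fake-image-bytes",          # stand-in for the rendered asset
    performer_id="perf-0142",     # hypothetical identifiers
    consent_ref="consent/2025/88",
    model_name="facegen-v3",
)
```

The design choice worth noting is that the log records references to consent documents rather than interpretations of them — which is what lets the same audit trail satisfy whichever disclosure regime ultimately applies.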
Real-time generation and what comes after
The capability that will define the next phase of AI image generation in media is real-time output — models fast enough to generate consistent character faces during live broadcast, interactive streaming, or video call environments. Early production-quality prototypes are running at 30 frames per second on consumer-grade GPU hardware in controlled conditions, and the trajectory of improvement suggests this will be a broadly available capability within 18 months. When real-time AI face generation reaches production quality at scale, the boundary between pre-generated and live-generated content will become effectively invisible to viewers. That convergence creates extraordinary creative possibilities and equally serious challenges for media authenticity and audience trust.
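"Real time at 30 frames per second" is ultimately a budget constraint: every stage of generation for one frame must complete in roughly 33 ms. A minimal sketch of that budget check, with a trivial stub standing in for the model call:

```python
import time

FPS_TARGET = 30
FRAME_BUDGET_S = 1.0 / FPS_TARGET  # ~33.3 ms per frame at 30 fps


def run_frame(generate_fn):
    """Run one generation step and report whether it met the frame budget."""
    start = time.perf_counter()
    frame = generate_fn()
    elapsed = time.perf_counter() - start
    return frame, elapsed, elapsed <= FRAME_BUDGET_S


# Stub standing in for a face-generation model call; a real model would
# spend its budget on inference, compositing, and encoding combined.
frame, elapsed, on_budget = run_frame(lambda: b"\x00" * 1024)
print(round(FRAME_BUDGET_S * 1000, 1))  # 33.3 ms budget
```

The budget is why "30 fps in controlled conditions" and "30 fps in live broadcast" are different claims: every millisecond of compositing, colour work, and encoding comes out of the same 33 ms.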