In partnership with a beloved international multimedia conglomerate known for its expansive and intricately connected cinematic universe, our team at Accenture was brought in to explore how AI could revolutionize content continuity and production support.
One of our core deliverables was a first-of-its-kind AI system designed to track and manage the complex web of storylines, characters, and events across the client’s cinematic universe. The goal was to maintain continuity across films and shows—supporting writers, producers, and directors as they developed future content. The tool was so groundbreaking that it was presented (under NDA) to select insiders at the client's annual Expo.
In parallel, we developed an AI program for voice isolation and on-screen speaker visibility in international dubbing workflows. Previously, the client had to send heavily watermarked video to overseas partners for lip dubbing; while secure, the watermarking made it hard for voice actors to see the original performers' mouths and emotions. Our AI tool could identify who was speaking in a scene and isolate them on screen, blurring or masking everything else, preserving content security while providing the visual clarity needed for quality dubbing.
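The system's internals are under NDA, so the following is only a minimal, hypothetical sketch of the masking step, not the actual implementation. It assumes an upstream active-speaker model has already produced a bounding box for whoever is talking; the function name, the toy frame, and the flat fill value are all mine (a real pipeline would operate on video frames and apply a proper blur rather than zeroing pixels):

```python
def mask_outside_speaker(frame, box, fill=0):
    """Return a copy of `frame` with every pixel outside `box` set to `fill`.

    frame: 2D list of pixel values (rows x cols), standing in for one
           video frame.
    box:   (top, left, bottom, right), bottom/right exclusive -- the
           active speaker's region. Detecting this box is the job of a
           separate speaker-detection model, not shown here.
    """
    top, left, bottom, right = box
    return [
        [px if (top <= r < bottom and left <= c < right) else fill
         for c, px in enumerate(row)]
        for r, row in enumerate(frame)
    ]

# Toy 4x4 "frame"; the speaker occupies the 2x2 centre region.
frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
masked = mask_outside_speaker(frame, (1, 1, 3, 3))
```

The point of the sketch is the security property: everything outside the speaker's region is destroyed before the video leaves the building, so partners see only the mouth and expression they need for dubbing.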
This solution ultimately led to a patent being issued to Accenture—solidifying the project’s technical and strategic value.
This was, without question, one of my favorite projects. I had the chance to work with a company I’ve loved since I was a kid, and the experience felt nostalgic and futuristic at the same time. One of the highlights was facilitating a tech demo for a senior VP from the client company, a moment that reminded me just how far I’d come and how impactful our work could be.
What made this project so special was the level of creativity and innovation involved. We weren’t just optimizing existing systems; we were building something brand new, something that would directly influence how the movies and shows I love are created. That’s a rare and exciting feeling.
I also see so much untapped potential in the tools we developed. I’ve watched plenty of dubbed films where the audio and mouth movements feel out of sync, and it’s jarring. I believe this technology can go beyond content protection and actually enhance the viewer experience, especially for non-English-speaking audiences. Imagine subtle AI-assisted lip adjustments that make dubbed content feel more seamless and immersive.
To be clear, this isn’t about replacing humans. It’s about augmenting creativity and enabling artists and storytellers to deliver richer, more inclusive content experiences across languages and cultures.