
The Dawn of Synthetic Cinema: NavSar and the Making of the First AI Feature Film

The cinematic landscape is currently undergoing its most radical transformation since the transition from silent film to "talkies." At the heart of this revolution is ExoPlanet, a 90-minute feature film that isn't just about technology—it is born from it. Produced by NavSar.tech, an AI company with dual footprints in Geneva and Bangalore, the project serves as a high-stakes litmus test for the future of entertainment.

As the production unfolds, a central question emerges: Can Bangalore, India’s "Silicon Valley," leverage its deep tech roots to become the global hub for AI-driven filmmaking?

Navin Manaswi, a global AI domain expert and the engine behind NavSar, sees the project as a convergence of logic and lyricism. Manaswi, who frequently represents India at global AI forums, is currently overseeing the production of the space-themed epic within his specialized labs.

"We are not just using AI for 'VFX' or post-production," Manaswi explains. "We are treating AI as a co-creator that spans the entire lifecycle of the film—from generative scriptwriting and storyboarding to synthetic voice synthesis and neural rendering of extraterrestrial environments. The challenge is the sheer scale. Making a 15-second clip is easy; maintaining visual consistency, emotional depth, and narrative coherence over 90 minutes is the ultimate technological Everest."

For Manaswi, the choice of a space film is deliberate. Creating ExoPlanet allows the AI to flex its muscles in world-building, where physical laws can be subtly bent, and environments are limited only by computational power rather than location budgets.

Bridging the gap between traditional craft and futuristic tools is K B John, an FTII (Film and Television Institute of India) alumnus and veteran filmmaker. John has spent decades in the trenches of Mumbai’s film industry and now finds himself at the forefront of the AI frontier.

"Filmmaking has always been about the control of light and time," says John. "AI gives us a new way to manipulate those elements. By harnessing NavSar’s technologies, I can iterate a scene fifty times in an afternoon. In the traditional world, that would take weeks and millions of rupees. However, the 'Technology Challenge' is ensuring the soul of the performance remains. AI can give us the pixels, but we, the directors, must provide the pulse."

John emphasizes that the role of the director is shifting from "commander of a massive crew" to "curator of intelligent systems."

The visual integrity of a film depends on the eye of the cinematographer. For Sukumar Jatania, an FTII alumnus known for his work in prestigious James Ivory productions, the shift to AI-driven imagery is both a challenge to tradition and a gift to creativity.

"In the Merchant Ivory days, we obsessed over the natural light of a period room," Jatania reflects. "Now, with NavSar's generative engines, I am learning to 'prompt' light. I can ask for the soft glow of a dying star or the harsh, atmospheric refraction of a dual-sun system on a distant planet. The challenge is 'Digital Plasticity.' We must ensure the AI doesn't produce images that feel too perfect or 'uncanny.' We need the grit, the grain, and the happy accidents that make cinema feel real."

Similarly, Yogendra Panda, a celebrity cinematographer with a career spanning Indian and British cinema, views AI as a democratizing force for visual grandeur.

"Throughout my career, I’ve navigated the technical differences between Bollywood's vibrancy and the British aesthetics’ subtlety," Panda notes. "AI allows us to blend these sensibilities seamlessly. In the making of this space film, we are using AI to solve the 'Lighting Continuity' problem across thousands of frames. It allows a small team in a Bangalore lab to achieve the visual scale of a $200 million Hollywood blockbuster. If we master this, Bangalore won't just be a back-office for VFX; it will be the creative cockpit of the world."

Today, everyone’s talking about AI movies like they’re coming out next week, but honestly? The road to a real, seamless feature film is paved with some pretty gnarly technical walls. It’s easy to get caught up in the hype, but a few obstacles still make it feel like we're a long way off.

The biggest issue—and seriously, it’s like the "Holy Grail" of the industry—is temporal consistency. It sounds fancy, but it basically just means making sure things don't "jitter" or change for no reason. Right now, you might have a character’s face or their shirt subtly shifting or morphing from one frame to the next, and it's super distracting. If a protagonist's jacket turns into a vest halfway through a scene, you’ve lost the audience.

Then there's what I'd call the emotional latency problem. AI is amazing at mimicry, but it still struggles with the real nuance of human feeling. It’s those tiny micro-expressions—like the slight quiver of a lip when someone's trying not to cry—that really sell a performance. AI can get close, but it often misses that "soul" and ends up feeling a bit hollow or robotic.

And don't even get me started on the compute barrier. People really underestimate the sheer amount of power you need for this. To render a full 90-minute movie in 4K or 8K using advanced diffusion models? The GPU power required is just staggering. Most creators just don't have access to the kind of massive server farms you'd need to crunch those numbers without it taking literal years to finish.

Navin Manaswi, Co-Founder, NavSar

 

Bangalore stands at a unique intersection. It possesses the world-class software engineering talent required to build the underlying models, and it is increasingly attracting creative rebels from FTII and Mumbai who are tired of the traditional studio bottlenecks.

Navin Manaswi believes the city’s ecosystem is its greatest asset. "In Geneva, we have the precision and the global networking. In Bangalore, we have the 'Jugaad' (frugal innovation) and the raw engineering horsepower. When you combine the two at NavSar, you get a laboratory that can produce a film like ExoPlanet for a fraction of the cost of a traditional studio."

The success of NavSar.tech’s venture could signal a shift where the "film studio" of the future looks less like a sprawling soundstage and more like a high-density data center.

The making of an AI feature film is not merely a technical exercise; it is the birth of a new medium. While the challenges are formidable—ranging from ethical concerns about synthetic actors to the technical rigors of temporal consistency—the collaboration between tech pioneers like Manaswi and seasoned artists like John, Jatania, and Panda suggests a balanced path forward.

If ExoPlanet succeeds, it will prove that AI is not the death of cinema, but its most profound rebirth. And at the center of that rebirth may very well be Bangalore—the city where code and creativity finally became one.

To achieve the 90-minute runtime for ExoPlanet, Navin Manaswi and his team at NavSar.tech are tackling "Temporal Consistency"—the ability of an AI to remember what it drew three seconds ago—using a multi-layered architectural approach. Standard AI video models often suffer from "morphing" (where a character’s face or suit details shift frame by frame). NavSar is reportedly implementing several cutting-edge "anchor" and "memory" technologies to prevent this.

When we talk about how NavSar actually handles the visuals, it all starts with Latent Space Anchoring and their specific flavor of Video LDMs. Instead of the AI trying to draw every single frame from a blank canvas—which usually leads to a mess—the system operates within a "latent space" (a compressed mathematical representation of the image). They’ve fine-tuned these models specifically for video by sliding temporal layers right into the denoising process. The big "win" here is how it aligns those latents so that specific details, like the reflection on an astronaut's visor or the tiny bolts on a ship, stay exactly where they should be from one second to the next. Plus, since ExoPlanet is set in space, the team uses that harsh, high-contrast lighting as a sort of "visual anchor" to help the AI keep track of structural edges against the blackness of the vacuum.
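To make that "temporal layers in the denoising process" idea concrete, here is a minimal sketch of how a temporal attention layer can be slotted into a video latent diffusion model so that each spatial position attends across frames. NavSar's actual architecture has not been published, so the class and variable names below are illustrative, not their API.

```python
# Minimal sketch of a temporal attention block in the style of published
# Video LDM work. Names and shapes are illustrative only.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention across the time axis of a video latent.

    Latents have shape (batch, frames, channels, height, width). Each
    spatial position attends to the same position in other frames, which
    is what keeps details (visor reflections, hull bolts) anchored from
    one frame to the next.
    """
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = z.shape
        # Fold spatial positions into the batch so attention runs over frames.
        x = z.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        x = self.norm(x)
        out, _ = self.attn(x, x, x)  # each pixel attends across time
        out = out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
        return z + out               # residual: spatial layers stay untouched

# A denoising step would interleave this with the usual per-frame spatial
# blocks: z = spatial_block(z); z = temporal_attention(z); ...
```

The design point is that the temporal layer is a residual add-on, so the pretrained spatial (per-image) layers are left intact—the standard way image diffusion models get retrofitted for video.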

To deal with that annoying "jitter" you see in a lot of AI video, the team is leaning hard on Temporal-Spatial Attention Mechanisms, or TSAM. I like to think of this as a digital short-term memory. It’s looking at the current frame and the previous frames at the same time to make sure things are moving logically. Like, if a star in the background suddenly "teleports" five pixels to the left for no reason, the TSAM catches it and penalizes the model. Navin Manaswi has mentioned that they’ve even baked "physics-informed" constraints into this. It’s pretty cool because it means movements in zero-G actually look fluid and weightless rather than just... chaotic or glitchy.
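As a rough illustration of that "catch the jitter and penalize the model" idea, the sketch below implements a generic temporal smoothness loss in PyTorch. It is a textbook-style stand-in, not NavSar's proprietary TSAM: it punishes sudden accelerations in pixel values, so smooth zero-G drift stays cheap while a one-frame "teleport" gets expensive.

```python
# Toy "jitter penalty": penalize the second difference (acceleration) of
# pixel values across time, not the raw change, so smooth motion is free
# but abrupt one-frame jumps are punished. Generic sketch, not NavSar code.
import torch

def temporal_jitter_loss(frames: torch.Tensor) -> torch.Tensor:
    """frames: (batch, time, channels, height, width), values in [0, 1]."""
    velocity = frames[:, 1:] - frames[:, :-1]          # frame-to-frame change
    acceleration = velocity[:, 1:] - velocity[:, :-1]  # change of the change
    return acceleration.abs().mean()

# During training this term would be added to the main generative loss:
# loss = denoising_loss + lambda_jitter * temporal_jitter_loss(decoded_frames)
```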

Then there is the Cycle-Consistency and Optical Flow Refinement part of the pipeline. NavSar uses a trick called Temporal Cycle-Consistency (TCC), which is basically a round-trip check on the AI's homework. The AI predicts Frame B from Frame A (the Forward Pass), but then it has to try and "re-derive" Frame A back from the Frame B it just made (the Backward Pass). If the "re-derived" version doesn't match the original, the system knows it messed up the consistency and forces a re-render. It’s a rigorous way to make sure the motion stays "locked in" without drifting off into weird AI hallucinations.
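The round-trip check is simple enough to show in a few lines. In the sketch below, `predict_next` and `predict_prev` are hypothetical stand-ins for the model's forward and backward passes, and the drift threshold is an arbitrary illustration, not a published figure.

```python
# Sketch of a temporal cycle-consistency gate: predict B from A, recover
# A' from B, and force a re-render if the round trip drifts too far.
import torch

def fails_cycle_check(frame_a: torch.Tensor,
                      predict_next,
                      predict_prev,
                      threshold: float = 0.02) -> bool:
    """frame_a: (C, H, W) tensor in [0, 1]. Returns True if the frame
    should be regenerated."""
    frame_b = predict_next(frame_a)      # forward pass: A -> B
    frame_a_rec = predict_prev(frame_b)  # backward pass: B -> A'
    drift = (frame_a - frame_a_rec).abs().mean().item()
    return drift > threshold             # over budget => re-render

# Usage in a render loop (hypothetical names):
# if fails_cycle_check(a, model.forward_step, model.backward_step):
#     resample_frame()
```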

Finally, for the actual characters, K B John and the tech leads are using something they call a Semantic Identity Lock, or "Character DNA." This is honestly super important because AI is notorious for changing someone's face halfway through a scene. Before they even start rendering, they create a "DNA" file—a high-res, multi-angle map of the actor's features. Every single time the AI generates a frame, it has to cross-reference this DNA file. It’s like a permanent anchor for the actor’s identity, ensuring that things like nose shape or eye color don't "drift" or change as the camera moves around them. It keeps the "soul" of the performance intact across the whole sequence.
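Here is a hedged sketch of what such an identity gate could look like: a bank of reference embeddings built from multi-angle captures, checked against every generated frame. `embed_face` is a placeholder for any face-recognition encoder (an ArcFace-style model, say); nothing below is NavSar's actual "Character DNA" implementation.

```python
# Illustrative identity-lock check: the "DNA file" is modeled as a bank of
# reference face embeddings; each rendered frame must match at least one
# reference angle closely enough, or it gets regenerated.
import torch
import torch.nn.functional as F

class CharacterDNA:
    def __init__(self, reference_frames, embed_face):
        self.embed_face = embed_face
        # One embedding per reference angle, stacked into an (N, D) bank.
        self.bank = torch.stack([embed_face(f) for f in reference_frames])

    def identity_ok(self, rendered_frame, min_similarity: float = 0.85) -> bool:
        """Cross-reference a generated frame against the DNA bank."""
        e = self.embed_face(rendered_frame)
        sims = F.cosine_similarity(self.bank, e.unsqueeze(0), dim=1)
        return sims.max().item() >= min_similarity

# Per-frame gate in the render loop (hypothetical helper name):
# if not dna.identity_ok(frame):
#     frame = regenerate_with_identity_guidance(frame)
```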

The real hurdle here isn't just "the AI" in some abstract sense—it is what we've been calling the Compute-Creativity Loop. Trying to render a full 90-minute film with the kind of visual fidelity people expect today is a massive undertaking. To pull it off, NavSar is leaning heavily on some pretty intense tech, like these high-density GPU server banks over in Bangalore that are basically dedicated 24/7 to "Neural Rendering."

They are also doing this clever thing with Hybrid Upscaling. Basically, they generate the footage at a lower resolution so the AI stays consistent and doesn't "hallucinate" weird details, then they use something called "Upscale-A-Video" to bump it up to 4K. Manaswi explained it to me like they're building a "Temporal Bible." If a character has an oxygen tank with three red stripes, the TSAM (the Temporal-Spatial Attention Mechanism described earlier) has to make sure those stripes don't turn into two stripes or a circle halfway through the scene. When you realize that's 129,600 frames to keep track of at 24fps... yeah, it's a lot of math.
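The frame math checks out, and the shape of the pipeline is easy to sketch. In the snippet below, `generate_lowres` and `upscale_video` are hypothetical placeholders (the latter standing in for an Upscale-A-Video-style super-resolution model), not a real API.

```python
# Back-of-the-envelope check on the frame count quoted above, plus the
# rough shape of a generate-low-then-upscale pipeline.
MINUTES, FPS = 90, 24
total_frames = MINUTES * 60 * FPS  # 90 * 60 * 24
print(total_frames)                # -> 129600, matching the article's figure

def hybrid_upscaling(prompt_latents, generate_lowres, upscale_video):
    """Generate at low resolution (where the model stays consistent and
    hallucinates less), then super-resolve to 4K as a separate pass."""
    lowres_clips = generate_lowres(prompt_latents)
    return upscale_video(lowres_clips)
```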

When you actually look at the production budget for ExoPlanet, it’s wild how much they’re flipping the Hollywood script. In a normal Tinseltown space epic, you spend millions just moving heavy gear and feeding hundreds of people. NavSar’s logic is totally different: it's "Compute over Physicality." Instead of paying for hotel rooms in London or Atlanta, they’re buying raw processing power. The numbers are honestly kind of shocking. A big studio might spend $30M to $60M just on "Above-the-Line" costs—you know, the big stars and rights—but NavSar keeps that under $5M by sticking to core creative teams and AI experts.

The gap gets even bigger when you talk about the actual shoot. A blockbuster usually needs a crew of 500+ and costs maybe $70M to film. The AI-led approach? It’s more like $0.5M to $1.5M because you’re using a skeleton crew in a lab. Even the VFX, which is usually the most expensive part of a space movie (sometimes hitting $100M!), is transformed. By using their own GPU clusters and proprietary models, they’re getting similar results for maybe $4M max. It’s not just a cheaper way to make a movie; it’s a total reimagining of what a "studio" even looks like in 2026.
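Taking the figures quoted above at face value (rough, article-sourced numbers, not audited budgets), the arithmetic looks like this:

```python
# Rough cost comparison using only the numbers cited in the text above:
# upper-end traditional figures vs. upper-end AI-led figures, in $M.
traditional = {"above-the-line": 60, "principal photography": 70, "vfx": 100}
ai_led      = {"above-the-line": 5,  "principal photography": 1.5, "vfx": 4}

for line_item, cost in traditional.items():
    ratio = cost / ai_led[line_item]
    print(f"{line_item}: ${cost}M vs ${ai_led[line_item]}M (~{ratio:.0f}x)")
```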

According to Navin Manaswi, the "Bangalore Advantage" comes down to killing "production friction." In a normal shoot, the "Day Rate" is a killer—you're burning $200k to $500k a day on catering, permits, and union labor. With AI, the "set" is just a high-performance lab. There are no rain delays or 14-hour union turnovers. If you want to change the "Mars" landscape, you don't need a permit.

Sukumar Jatania also pointed out how they've traded physical sets for "prompted" light. Building a spacecraft cockpit used to cost $5 million easily. At NavSar, if Yogendra Panda decides he wants the lunar soil to look more like metallic basalt instead of grey, they just change a seed or a prompt. You don't have to call in a construction crew for a $50k reshoot. It’s what Manaswi calls "VFX Compression." In the old days, you’d have thousands of artists manually rotoscoping everything. Now, the "shot IS the effect." It drops the cost of a CGI shot by nearly 90%.

Of course, it’s not all "free." NavSar just traded one type of bill for another. They have huge "Infrastructure" expenses that Hollywood isn't used to yet. We’re talking about tens of thousands of H100 GPU hours and a massive upfront R&D investment to make sure the actors' faces stay consistent throughout the film. It's a different kind of burn rate, but one that actually ends up on the screen.

"The budget doesn't disappear; it just migrates," says Navin Manaswi. "Instead of paying for 5,000 hotel room nights, we are paying for 5,000 terabytes of high-speed data throughput and the specialized engineers who can guide the latent space."

 


Sarat C. Das 
(The content of this article reflects the views of the writer and contributor, not necessarily those of the publisher and editor. All disputes are subject to the exclusive jurisdiction of competent courts and forums in Delhi/New Delhi only)

 
