Introducing Machine For Film
Imagine a machine that generates films that are creatively rich, that make you laugh, cry, gasp and dream, and that you consider genuinely great art.
This is what we’re trying to build with this experimental project.
We are developing a fully automated workflow that combines the latest AI models to produce fully-fledged videos — with moving images, sound, music, characters, locations, narratives — all the elements that make film and TV as we know it. Every aspect of the workflow is automated. We produce films from beginning to end using a single prompt, and with no further human intervention.
Right now, the outputs are interesting but imperfect. We can generate films of any length with coherent narratives, images, music and narration, but the outputs are simple, constrained, contain inconsistencies, and are not fully compelling, at least compared to traditional film and video.
We are improving the machine week on week to generate films that are higher quality, more consistent and more compelling, and we hope in the coming months to generate outputs that people genuinely want to watch, not simply because they are examples of AI-generated film.
We are sharing daily examples of outputs from the workflow on our X feed, and will share more detailed thoughts on this blog, including reflections on what is — and what is not — working and details of the underlying technologies. As we continue to develop the workflow, we welcome any feedback and suggestions either here or on X.
The workflow
As of August 2025, the workflow is:
- Operational.
- Producing videos of any length with moving images, music and voice narration.
- Designed in a modular way to allow for different workflows and combinations of AI models.
The workflow has a basic but functioning front end that allows users to input a single prompt and select key backend parameters, including video length, style and specific AI models. The backend comprises a range of AI modules that generate components of the final output, combined by an overarching orchestrator. We will share more details of this design in the coming weeks and months.
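To make the modular design concrete, here is a minimal sketch of how an orchestrator can combine interchangeable AI modules over shared project state. All names, data structures and module signatures are illustrative assumptions, not the workflow's actual code; real modules would call model APIs rather than return placeholder values.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Project:
    """Shared state that every module reads from and writes to (hypothetical)."""
    prompt: str
    length_minutes: int = 5
    style: str = "animation"
    components: dict = field(default_factory=dict)

# A module is any callable that adds one component of the final film
# (script, images, music, narration, ...) to the shared project state.
Module = Callable[[Project], None]

def script_module(project: Project) -> None:
    # A real module would call a language model here.
    project.components["script"] = f"Script for: {project.prompt}"

def image_module(project: Project) -> None:
    # A real module would call an image model, conditioned on the script.
    project.components["images"] = ["frame 1", "frame 2"]

class Orchestrator:
    """Runs the registered modules in order over one shared Project.
    Swapping modules in or out yields a different workflow."""
    def __init__(self, modules: list[Module]):
        self.modules = modules

    def run(self, prompt: str, **params) -> Project:
        project = Project(prompt=prompt, **params)
        for module in self.modules:
            module(project)
        return project

machine = Orchestrator([script_module, image_module])
film = machine.run("a duck that goes on an adventure", length_minutes=3)
print(sorted(film.components))
```

The point of this shape is that the prompt and key parameters arrive once, at the top, and every downstream choice (which models, which order) is a configuration of the module list rather than a code change.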
Looking ahead
We have two near-term goals when developing the workflow:
- Improve output quality: We aim to continuously improve the quality of the videos generated by our workflow by adjusting prompts, model selection, model parameters and workflow design.
- Improve workflow design: Many aspects of the workflow design do not contribute directly to output quality but are nonetheless important, such as speed, cost and flexibility.
Looking further ahead, our medium-term goal is to build a machine that generates narrative videos that meet the minimum requirements for a useful creative product. By this we specifically mean a machine that generates video people genuinely want to watch, irrespective of the underlying mechanics. This will likely begin in more constrained creative spaces, i.e. those especially well-suited to AI-generated film, but over time we hope to expand into more creative and experimental areas.
Our longer-term goal is to build a machine that advances creative expression into new and compelling areas. Much of the current focus in AI film is on making traditional filmmaking cheaper and faster, but we believe the opportunity is much bigger, more interesting and more diverse. What this will be exactly, we don’t know yet, but we have some ideas. We also believe that building fully automated workflows in the near term is an initial route into this future world.
We know the outputs from the workflow are not there yet, but we believe that through continuous improvement we will move closer and closer to this new creative world. If Tuesday’s video is better than Monday’s, and Wednesday’s is better than Tuesday’s, we hope that December’s will be better by an order of magnitude.
We believe these incremental improvements will lead over time to a significant improvement in quality, meeting creative benchmarks and opening doors to new creative domains.
We look forward to sharing this journey with you!

George Kenwright
George is a video producer specialising in the intersection of generative AI and video production. He is currently a Creative Producer at Google DeepMind and founder of Morning Star AI, an AI video production company. He has worked with clients including ITV Studios, Amazon, Sky Arts and Google.

Richard Flint
Richard is a filmmaker, data scientist and software developer. He previously led the AI R&D team in Deloitte Ventures, and has been a data scientist and technical researcher at Deloitte and the RAND Corporation. He has helped create several award-winning short films, primarily as Director of Photography.
Week 4 Videos
Prompt for this week: “Simple stories about going back-to-school but where the characters are not kids.”
Theme for this week: This week we focused on refactoring the code base to reduce technical debt and improve the readability and extensibility of the code. We used Google’s nanobanana for image generation and Runway’s Gen4 Turbo for image-to-video animation.
Evening Rise
Circuit of Empathy
The Ember Scholar
Ollie’s Big Splash
Chromatic Confluence
Week 3 Videos
Prompt for this week: “Simple stories about someone who goes in disguise to achieve something.”
Theme for this week: The focus was on further improving visual continuity and quality. We introduced two changes:
(1) an “all-at-once” approach that generates outputs (e.g. image descriptions, prompts, etc.) using a single prompt (e.g. generate all shots for this video) rather than multiple repeated prompts (e.g. generate the shot description for shot 1, then shot 2, and so on).
(2) image references for characters and locations that are combined with prompts for image generation.
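Change (1) can be sketched roughly as follows. The stub `call_model` stands in for a real model API call, and every function name and prompt here is a hypothetical illustration rather than the workflow's actual implementation.

```python
import json

def call_model(prompt: str) -> str:
    # Stub: returns canned JSON so the sketch runs without an API key.
    # A real implementation would send the prompt to an LLM.
    if "all" in prompt:
        return json.dumps([f"description of shot {i}" for i in range(1, 4)])
    return json.dumps("description of one shot")

def shots_one_by_one(n_shots: int) -> list:
    """Repeated-prompt approach: one model call per shot.
    Each call sees only its own shot, so continuity can drift."""
    return [json.loads(call_model(f"Describe shot {i} of the film."))
            for i in range(1, n_shots + 1)]

def shots_all_at_once(n_shots: int) -> list:
    """All-at-once approach: a single call returns every shot together,
    so the model writes each description with the whole film in view."""
    return json.loads(call_model(
        f"Describe all {n_shots} shots of the film as a JSON list."))

print(shots_all_at_once(3))
```

The trade-off is that the all-at-once call consumes a larger context window per request, but it gives the model the full narrative when writing each shot, which is what helps continuity.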
The Neon Janitor
The Jester’s Masquerade
Echoes of Vengeance
Bake of Deception
Echoes in the Fog
Week 2 Videos
Prompt for this week: “Short film ideas about a duck that goes on an adventure that changes its appearance at some point e.g. it gets muddy, its feathers change colour etc.”
Theme for this week: This week we wanted to test visual continuity further, so we kept the stories and visual prompts simpler and more similar. We also focused on 3D-style images to test whether image-to-video works better with them.
Feathers of Mud
Pip’s Chromatic Escape
The Luminous Quest of Aurelia
Metallum Flight
Glow Beyond the Pond
Week 1 Videos
Prompt for this week: “Short films about a creature (e.g. human, animal, mythical creature, robot, etc.) finding an egg which then hatches and then something happens.”
Theme for this week: No strict theme as it’s the first week; we mostly used higher-performing models and gave the machine as much creative control as possible.
Nimbus and the Nuisance
The Veiled Bloom
The Chanterelle Catastrophe
Eggsecutive Order
Ember Alley’s Guardian
Contact us
Contact George and Richard using the form below, via our X (Twitter) feed, or directly at our email address: [email protected]