Introducing Shot Generator
ShortPlay Studio Team
April 14, 2026
Shot Generator is the production workflow we are building at ShortPlay Studio for teams that need to move from script to visual output without stitching together five separate tools.
At a high level, the product is organized as a six-step pipeline:
1. Upload a script
2. Define a global style
3. Extract and generate reusable assets
4. Analyze each shot
5. Generate storyboard images
6. Generate shot videos
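Because each step depends on the one before it, the pipeline can be modeled as a simple ordered sequence. The sketch below is illustrative only: the step names and the `nextStep` helper are assumptions for this post, not the actual identifiers used in the codebase.

```typescript
// Hypothetical model of the six-step pipeline as an ordered sequence.
// Step names are assumptions for illustration; real route names may differ.
type PipelineStep =
  | "upload_script"
  | "define_style"
  | "generate_assets"
  | "analyze_shots"
  | "generate_storyboard"
  | "generate_videos";

const PIPELINE: PipelineStep[] = [
  "upload_script",
  "define_style",
  "generate_assets",
  "analyze_shots",
  "generate_storyboard",
  "generate_videos",
];

// Since each step gates the next, the next actionable step is
// simply the first one that has not completed yet.
function nextStep(completed: Set<PipelineStep>): PipelineStep | null {
  for (const step of PIPELINE) {
    if (!completed.has(step)) return step;
  }
  return null; // all six steps are finished
}
```

For example, a project that has only uploaded its script would be prompted toward style definition next: `nextStep(new Set(["upload_script"]))` returns `"define_style"`.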
That structure is not just a UI choice. It is reflected across the current application architecture, API routes, and persistence model in this codebase.
What the product already does
The current system is designed around a practical studio workflow rather than a one-off image prompt box.
You start with a script file or script text. From there, the app creates a project, stores the script, and prepares the material for storyboard generation. After that, the workflow moves into global style definition, where a single visual direction can be applied before generating downstream assets and shots.
The next stage is asset development. Instead of treating every image request as isolated, the system extracts reusable production elements such as characters, scenes, and props. That gives the rest of the pipeline a more stable base for shot generation.
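One way to picture this stability is to have shots reference extracted assets by id rather than re-describing characters or locations in every prompt. The field names below are assumptions for illustration, not the actual persistence model:

```typescript
// Hypothetical data model for reusable production assets.
// All field names here are assumptions for illustration.
type AssetKind = "character" | "scene" | "prop";

interface Asset {
  id: string;
  kind: AssetKind;
  name: string;
  description: string;        // visual description reused across shots
  referenceImageUrl?: string; // filled in once the asset is generated
}

// A shot points at assets by id, so the same character or scene
// description feeds every shot that uses it.
interface Shot {
  id: string;
  assetIds: string[];
  visualPrompt: string;
}

function assetsForShot(shot: Shot, assets: Map<string, Asset>): Asset[] {
  return shot.assetIds
    .map((id) => assets.get(id))
    .filter((a): a is Asset => a !== undefined);
}
```

The payoff is consistency: regenerating a shot pulls the same asset descriptions as every other shot that references them.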
From there, Shot Generator enriches each storyboard entry with visual prompts, generates shot images, and finally supports video generation as a separate step. In the current project structure, image and video generation are handled through dedicated task endpoints so the workflow can keep track of progress across refreshes and retries.
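Because a task lives on the server rather than in the page, a client can resume polling it after a refresh instead of re-submitting the job. A minimal sketch of that pattern, with statuses and field names assumed for illustration:

```typescript
// Hypothetical task record tracked by a dedicated endpoint.
// Statuses and field names are assumptions, not the real API.
type TaskStatus = "pending" | "running" | "succeeded" | "failed";

interface GenerationTask {
  id: string;
  kind: "image" | "video";
  shotId: string;
  status: TaskStatus;
  resultUrl?: string;
}

// Poll a task until it reaches a terminal state. fetchTask stands in
// for whatever endpoint returns the current task record.
async function pollTask(
  fetchTask: (id: string) => Promise<GenerationTask>,
  id: string,
  intervalMs = 2000
): Promise<GenerationTask> {
  for (;;) {
    const task = await fetchTask(id);
    if (task.status === "succeeded" || task.status === "failed") return task;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

After a refresh, the client only needs the task id to pick up exactly where it left off.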
Why this matters
Most AI creation tools are optimized for single outputs. Real production work is different. A team usually needs continuity across multiple shots, reusable assets, and a way to recover work without re-running the same expensive job.
That is why the current implementation puts real effort into task persistence and workflow recovery.
The V2 system in this repository moves the product toward a database-driven model:
- asset analysis is versioned
- generation work is tracked as business tasks
- active work can be restored after refresh
- duplicate runs are reduced by design rather than cleaned up later
This makes the tool more suitable for longer-running creative sessions, especially when a project spans multiple scenes and multiple generations.
What we are building toward
The direction is clear: a single place where a creator can go from raw script to production-ready visual planning.
That means:
- clearer project structure
- reusable assets across shots
- more predictable generation behavior
- better recovery for in-progress work
- a tighter bridge between script thinking and visual output
The goal is not only to generate attractive images. It is to make visual development more organized, repeatable, and production-friendly.
What is next
We will keep improving reliability, generation continuity, and the handoff between each step of the workflow.
The current codebase already shows the shape of that system: project records, storyboard entities, asset analysis runs, generation task tracking, and separate APIs for script analysis, asset extraction, image generation, and video generation.
This is the foundation for a tool that helps creators move faster without losing control of the process.
If you are exploring AI-assisted pre-production, story visualization, or lightweight content pipelines, this is the direction we are building for.