TwinStage 3D Event Planner
Simulate first, build later.
An AI-assisted spatial planning system that lets event organizers design real-world venue layouts using natural language.
Overview
Large organizations regularly host events such as:
- annual conferences
- corporate celebrations
- training sessions
- exhibitions
- award ceremonies
Despite their scale and budget, one critical part of the workflow is still surprisingly manual: spatial layout planning.
Event planners must coordinate many elements inside a single venue:
- stage placement
- seating arrangements
- registration desks
- catering areas
- sponsor booths
- accessibility paths
- emergency exits
Today this is often done using 2D floor plans or spreadsheets. However, the real problems usually appear only on setup day.
Aisles are too narrow. Chairs block sightlines. Equipment blocks emergency exits.
Fixing these issues on-site is expensive, stressful, and sometimes unsafe. 3D Event Planner was built to solve this.
Instead of drawing layouts manually, planners describe their intent:
“Arrange 200 chairs facing the stage.” “Leave a central aisle.” “Keep all emergency exits clear.”
The system interprets the instructions and constructs the layout in a live 3D environment.
Inspiration
The idea came from observing real event preparation.
Before an event, planners visit a venue with a printed floor plan and try to imagine how everything will fit. But spatial reasoning in 2D drawings is difficult. Many practical issues only become visible during physical setup:
- inefficient walking paths
- poor visibility to the stage
- overcrowded seating
- unsafe exit clearance
The cost of discovering these problems late is high. At the same time, large language models have become very good at interpreting human instructions.
Instead of using AI to generate images, this project explores a different approach: AI as a spatial planning assistant that operates a simulation.
The system does not generate a picture; it builds a working environment.
How It Works
The project is a browser-based 3D planning environment built using:
- React + TypeScript
- Three.js rendering engine
- Structured command execution engine
- Natural language interpretation via an LLM (Gemini 3 Pro)
The architecture separates reasoning from execution.
Natural Language → Structured Commands
The AI does not directly manipulate the scene. Instead, it generates structured commands:
{
  "commands": [
    {
      "type": "arrayLayout",
      "assetId": "chair_01",
      "rows": 10,
      "cols": 20,
      "spacing": [0.55, 0, 0.55],
      "facing": "stage"
    }
  ]
}
The editor then deterministically executes these actions. This ensures:
- predictable behavior
- undo/redo support
- validation
- safe sandbox execution
AI suggests a plan. The engine performs verified actions.
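As a rough sketch of what deterministic execution could look like, the snippet below expands an arrayLayout command into concrete placements. The interfaces and function names here are illustrative, not the project's actual API:

```typescript
// Illustrative command shape, mirroring the JSON above.
interface ArrayLayoutCommand {
  type: "arrayLayout";
  assetId: string;
  rows: number;
  cols: number;
  spacing: [number, number, number]; // meters along x, y, z
}

interface Placement {
  assetId: string;
  position: [number, number, number];
}

// Expand one arrayLayout command into concrete placements.
// The engine, not the LLM, owns this math, so results are reproducible
// and can be validated, undone, and replayed.
function executeArrayLayout(cmd: ArrayLayoutCommand): Placement[] {
  const placements: Placement[] = [];
  const [sx, , sz] = cmd.spacing;
  for (let r = 0; r < cmd.rows; r++) {
    for (let c = 0; c < cmd.cols; c++) {
      placements.push({
        assetId: cmd.assetId,
        position: [c * sx, 0, r * sz],
      });
    }
  }
  return placements;
}
```

Because the expansion is pure and deterministic, the same command always yields the same layout, which is what makes undo/redo and validation tractable.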
Spatial Mathematics
This project performs spatial reasoning, not just visualization.
Transform Hierarchy
Every object has a local transform composed of translation, rotation, and scale. When objects are grouped (for example, chairs grouped into a row), their world position depends on the parent transform:
P_world = T_parent * T_child * P_local
This allows an entire row of chairs to move as a single unit.
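A minimal sketch of that composition, restricted to translation plus yaw (rotation about Y) for brevity; the types and helper below are assumptions, not the project's code:

```typescript
// A simplified transform: position plus rotation about the Y axis.
interface Transform {
  position: [number, number, number];
  yaw: number; // radians
}

// Apply a transform to a point: rotate about Y, then translate.
// Nesting two calls realizes P_world = T_parent * T_child * P_local.
function apply(t: Transform, p: [number, number, number]): [number, number, number] {
  const [x, y, z] = p;
  const c = Math.cos(t.yaw);
  const s = Math.sin(t.yaw);
  return [
    t.position[0] + c * x + s * z,
    t.position[1] + y,
    t.position[2] - s * x + c * z,
  ];
}

// A chair at its row's local origin, in a row offset 2 m along x,
// inside a group rotated 90° about Y:
const row: Transform = { position: [2, 0, 0], yaw: 0 };
const group: Transform = { position: [0, 0, 0], yaw: Math.PI / 2 };
const world = apply(group, apply(row, [0, 0, 0]));
```

Rotating or moving the parent transform moves every child with it, which is why a whole row of chairs can be repositioned as one unit.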
Unit Mapping
Event planners think in real-world measurements such as meters. To keep layouts predictable, the engine enforces:
1 scene unit = 1 meter
Therefore, instructions like:
“Leave a 1.5 meter aisle”
can be evaluated directly in the simulation.
Collision Detection
To ensure safety and accessibility, the system prevents overlapping furniture. Each object uses an axis-aligned bounding box (AABB). Two objects collide when:
(A_min_x <= B_max_x) AND (A_max_x >= B_min_x)
AND
(A_min_z <= B_max_z) AND (A_max_z >= B_min_z)
This automatically detects:
- blocked walkways
- inaccessible areas
- overcrowded seating
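The overlap test above translates almost directly into code. A small sketch on the ground plane (x/z), with illustrative types:

```typescript
// Axis-aligned bounding box on the ground plane.
interface AABB {
  minX: number;
  maxX: number;
  minZ: number;
  maxZ: number;
}

// Two boxes overlap when their intervals overlap on both axes,
// matching the inequality above.
function overlaps(a: AABB, b: AABB): boolean {
  return (
    a.minX <= b.maxX && a.maxX >= b.minX &&
    a.minZ <= b.maxZ && a.maxZ >= b.minZ
  );
}
```

For example, two 0.5 m chairs placed at a 0.55 m pitch do not overlap, while two chairs dropped on the same spot do.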
Clearance Constraints
The planner enforces real-world layout rules:
- aisle width ≥ 1.2 m
- emergency exit clearance ≥ 2.0 m
- stage safety buffer ≥ 1.0 m
For an aisle of width w:
w = x_right - x_left
and the system validates: w >= 1.2. If violated, the AI instruction is rejected and a corrected layout is requested.
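A clearance check of this kind might look like the following; the constant and shapes are taken from the rules above, but the function itself is a hypothetical sketch:

```typescript
// Minimum aisle width in meters (1 scene unit = 1 meter).
const MIN_AISLE_WIDTH = 1.2;

interface Aisle {
  xLeft: number;
  xRight: number;
}

// Validate an aisle's width before accepting an AI-proposed layout.
// On failure, the engine would reject the command and request a fix.
function validateAisle(aisle: Aisle): { ok: boolean; width: number } {
  const width = aisle.xRight - aisle.xLeft;
  return { ok: width >= MIN_AISLE_WIDTH, width };
}
```

Running every proposed layout through checks like this is what lets the engine reject unsafe instructions rather than silently rendering them.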
What I Learned
Language Is a Better Interface
Event planners think in logistics, not transforms or matrices. Natural language is more intuitive than traditional 3D editing tools.
AI Should Interpret, Not Execute
Direct AI manipulation produced unpredictable scenes. A structured command layer made behavior stable and reliable.
Simulation Prevents Real-World Risk
The main value is not visualization; it is preventing costly setup mistakes before the event begins.
Challenges
Ambiguous Human Instructions
Instructions like:
“leave enough space”
have no numeric meaning. The system required default spatial rules and constraints.
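One way to ground such phrases is to map vague terms onto conservative numeric defaults. The table and values below are purely illustrative, not the project's actual rules:

```typescript
// Map vague spacing phrases to default widths in meters.
// Unknown phrases fall back to the minimum aisle width (1.2 m),
// the most conservative safe default.
const SPACING_DEFAULTS: Record<string, number> = {
  "enough space": 1.2,
  "some room": 0.8,
  "a wide aisle": 2.0,
};

function resolveSpacing(phrase: string): number {
  return SPACING_DEFAULTS[phrase.toLowerCase()] ?? 1.2;
}
```

The defaults give every instruction a numeric meaning the validator can check, even when the user never states one.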
Coordinate Systems
Graphics engines use arbitrary units. Humans use meters. Standardizing units was necessary for realistic planning.
Performance
Large venues may include hundreds of objects. Solutions included:
- instanced meshes
- bounding-box caching
- optimized rendering updates
Trust & Safety
The system must never create impossible layouts. Therefore:
- AI proposes actions
- engine validates them
- invalid layouts are rejected
Conclusion
3D Event Planner introduces a new workflow:
Plan the event in simulation before building it in reality.
By combining natural language interaction and spatial simulation, planners can evaluate and refine layouts before setup day.
This project explores a broader idea, conversational spatial computing: interacting with real environments through dialogue instead of manual editing tools.
The future interface to 3D spaces may not be a mouse or a toolbar. It may simply be a conversation.
Built With
- build
- codemirror
- css
- drei
- html5
- javascript
- python
- react
- react-resizable-panels
- react-three-fiber
- state-management
- tailwind
- three.js
- typescript