Google DeepMind Opens Project Genie for Real-Time AI World Creation
Google DeepMind has opened access to Project Genie, an experimental world-building AI tool, as it looks to gather real-world feedback and accelerate progress on the world models it believes are central to the path towards artificial general intelligence.
What Is Project Genie and How Was It Built?
Project Genie is a web-based experimental research prototype developed by Google DeepMind that allows users to generate and explore interactive virtual worlds using text prompts or images. Technically, it is not a standalone model but a front-end experience built on top of several of DeepMind’s most advanced systems.
At its core is Genie 3, DeepMind’s latest general-purpose world model, which generates environments frame by frame in real time as users move through them. This is combined with Nano Banana Pro, an image generation model used to sketch and refine the initial appearance of a world, and Gemini, which handles higher-level reasoning and prompt interpretation. Together, these components allow Project Genie to turn a static description or image into a navigable environment that responds dynamically to user actions.
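DeepMind has not published a developer API for Project Genie, so the exact wiring between these systems is not public. Purely as an illustration of the flow described above, the Python sketch below shows how a prompt might pass through a reasoning layer, an image layer, and a frame-by-frame world model. Every class and method name here is a hypothetical placeholder, not a real Google interface.

```python
# Purely illustrative sketch of the component flow described above.
# None of these classes or methods are real Google APIs; they are
# placeholders showing how a prompt could move through the three layers.

from dataclasses import dataclass

@dataclass
class WorldSketch:
    description: str       # refined scene description
    start_image: bytes     # initial rendered view of the world

class HypotheticalGeniePipeline:
    def __init__(self, reasoning_model, image_model, world_model):
        self.reasoning_model = reasoning_model  # Gemini-style prompt interpretation
        self.image_model = image_model          # Nano Banana Pro-style image generation
        self.world_model = world_model          # Genie 3-style frame-by-frame simulator

    def create_world(self, user_prompt: str) -> WorldSketch:
        # 1. Higher-level reasoning turns a loose prompt into a structured scene.
        scene = self.reasoning_model.interpret(user_prompt)
        # 2. An image model sketches the initial appearance of the world.
        image = self.image_model.generate(scene)
        return WorldSketch(description=scene, start_image=image)

    def step(self, sketch: WorldSketch, history, user_action):
        # 3. The world model produces the next frame, conditioned on what has
        #    already happened and on how the user just moved.
        return self.world_model.next_frame(sketch, history, user_action)
```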
How Do You Use It?
Practically, users begin by creating what DeepMind calls a “world sketch”. This involves prompting the system with a description of an environment and a character, choosing a first- or third-person perspective, and optionally refining the generated image before entering the world. Once inside, the environment expands in real time as the user moves, with the model simulating basic physics, lighting, and object behaviour. Users can also remix existing worlds, explore curated examples, or download videos of their explorations.
Project Genie was built by DeepMind researchers including Jack Parker-Holder and Shlomi Fruchter, both of whom have been closely involved in the development of Genie 3 and earlier world model research.
DeepMind
Google DeepMind is Google’s dedicated AI research lab, formed through the merger of DeepMind and Google Brain and focused on developing general-purpose AI systems. Its long-term stated ambition is to build AI that can reason, plan, and act across the full complexity of the real world, rather than being limited to narrow tasks.
Genie 3 Previewed Back In August 2025
DeepMind first previewed Genie 3 as a research model back in August 2025, positioning it as a major step forward in interactive world simulation. Five months later, the decision to open Project Genie to a wider audience appears to reflect a deliberate transition from closed research testing to broader, real-world experimentation.
In its own recent announcement, Google stated that “the next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation.” Access is currently limited to Google AI Ultra subscribers in the United States aged 18 and over, reinforcing that this is still a controlled research rollout rather than a mass-market launch.
Why Now?
The timing matters here. World models are moving from abstract research concepts into systems that can be directly experienced and evaluated by users. By opening access now, DeepMind hopes to collect feedback, usage patterns, and behavioural data that are difficult to obtain through internal testing alone, while also demonstrating tangible progress in a competitive and fast-moving field.
What Genie Can Do and Who It’s Aimed At
Genie 3 enables real-time interaction at around 24 frames per second, with worlds that remain visually consistent for several minutes. Unlike traditional video generation models that produce a fixed sequence, Genie 3 generates each new frame based on what has already happened and how the user moves, allowing for exploration rather than playback.
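That behaviour can be pictured as an auto-regressive loop in which each new frame is conditioned on the frames generated so far plus the latest user input. The sketch below is an assumption about the general pattern rather than DeepMind’s implementation; world_model and get_user_action are hypothetical placeholders.

```python
import time

FPS = 24                      # approximate frame rate quoted for Genie 3
FRAME_BUDGET = 1.0 / FPS      # roughly 41 ms available to produce each frame

def explore(world_model, get_user_action, duration_s=60):
    """Minimal auto-regressive exploration loop (illustrative only)."""
    history = []                                         # frames generated so far
    for _ in range(int(duration_s * FPS)):
        start = time.monotonic()
        action = get_user_action()                       # e.g. move forward, turn left
        frame = world_model.next_frame(history, action)  # conditioned on the past
        history.append(frame)                            # the new frame becomes context
        elapsed = time.monotonic() - start               # keep pace with real time
        time.sleep(max(0.0, FRAME_BUDGET - elapsed))
    return history
```

At 24 frames per second, a 60-second session (the current limit discussed later) corresponds to roughly 1,440 sequentially generated frames, each conditioned on a growing history, which helps illustrate why real-time world generation is computationally expensive.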
Project Genie is aimed at several overlapping audiences. In the near term, it is most accessible to creators, researchers, and technically curious users who want to experiment with AI-generated environments. The tool supports whimsical and stylised worlds particularly well, including animated, illustrative, or fantastical settings.
Beyond creative exploration, DeepMind also appears to see some real value in Genie 3 for education, simulation, and research. World models can be used to train and test embodied agents (AI systems designed to act within an environment), including robots or software agents that move and make decisions. Instead of learning in the real world, where training can be expensive, slow, or risky, these agents can practise inside simulated environments. For example, an AI-controlled robot can learn how to navigate difficult terrain or react to unexpected situations without any physical risk or real-world consequences.
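As a rough illustration of that idea, the sketch below shows an agent practising inside a simulated world using a conventional reset/step training loop. The SimulatedWorld and agent interfaces are assumptions modelled on common reinforcement-learning toolkits, not a DeepMind API.

```python
# Illustrative sketch of training an embodied agent inside a simulated world
# instead of the real one. The simulated_world and agent interfaces are
# assumptions in the style of common RL toolkits, not a DeepMind API.

def train_in_simulation(agent, simulated_world, episodes=1_000):
    for _ in range(episodes):
        observation = simulated_world.reset()      # e.g. a freshly generated terrain
        done = False
        while not done:
            action = agent.act(observation)        # navigate, avoid obstacles, etc.
            observation, reward, done = simulated_world.step(action)
            agent.learn(observation, reward)       # mistakes here cost compute, not hardware
    return agent
```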
DeepMind described world models as systems that “simulate the dynamics of an environment, predicting how they evolve and how actions affect them,” framing Genie 3 as part of a broader capability rather than a single product feature.
How Project Genie Fits Into DeepMind’s AGI Strategy
World models now appear to occupy a central position in DeepMind’s vision for AGI (artificial general intelligence), meaning AI systems that can understand, learn, and reason across a wide range of tasks rather than being limited to a single narrow function. The lab has argued that this kind of intelligence requires an internal model of the world that supports planning, prediction, and counterfactual reasoning. In practical terms, this means being able to ask “what happens if” and simulate possible outcomes before acting.
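A minimal way to picture that “what happens if” capability is a planner that simulates each candidate action inside the world model and only executes the most promising one. In the sketch below, world_model.simulate and score_outcome are hypothetical stand-ins for whatever prediction and evaluation a real agent would use.

```python
# Minimal sketch of "what happens if" planning with a world model:
# simulate each candidate action internally, then act on the best outcome.
# world_model.simulate and score_outcome are hypothetical stand-ins.

def plan_with_world_model(world_model, current_state, candidate_actions, score_outcome):
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        predicted_state = world_model.simulate(current_state, action)  # imagined outcome
        score = score_outcome(predicted_state)                         # how good is it?
        if score > best_score:
            best_action, best_score = action, score
    return best_action   # only the chosen action is executed in the real environment
```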
Genie 3 builds on earlier models such as Genie 1 and Genie 2, but adds real-time interaction and longer-horizon consistency. This allows agents to execute longer sequences of actions and pursue more complex goals, which DeepMind sees as essential for general-purpose intelligence.
The company has already demonstrated Genie 3 being used to generate environments for SIMA, its generalist agent for 3D virtual settings. This reinforces that Project Genie is not the end goal, but a way to expose and test the underlying capabilities that future agents will rely on.
The Competitive Landscape and Why Timing Matters
The release of Project Genie comes as competition around world models is intensifying, with several AI labs and startups racing to build systems that go beyond static generation and towards interactive simulation.
For example, Runway has recently introduced its own world model concepts alongside its video tools. Also, World Labs, founded by Fei-Fei Li, has launched Marble as a commercial product aimed at interactive environments. Yann LeCun’s AMI Labs has also signalled a strong focus on world modelling as a foundation for intelligence.
By opening access now, DeepMind is hoping to position itself as a leader not just in theory, but in demonstrable, hands-on systems. This visibility matters for attracting talent, shaping industry standards, and influencing how developers and researchers think about the future of AI simulation.
Limitations, Guardrails, and Why This Is Still a Prototype
Despite its capabilities, Project Genie is explicitly framed as an experimental research prototype. Usage sessions are currently limited to 60 seconds of world generation and navigation, reflecting the heavy computational cost of auto-regressive real-time models.
With this in mind, Google has acknowledged several known limitations. For example, generated worlds may not closely match prompts or real-world physics, characters can be difficult to control, and latency can affect interaction. Some Genie 3 capabilities announced in August, such as promptable world events that change environments mid-exploration, are not yet available in Project Genie.
DeepMind has also been quick to emphasise responsible development. For example, safety guardrails restrict copyrighted content, realistic depictions of certain subjects, and other sensitive material. The company stated that “as with all our work towards general AI systems, our mission is to build AI responsibly to benefit humanity.”
These constraints help explain why Project Genie is not being positioned as a consumer product or game platform, but as a testbed designed to surface technical weaknesses and user expectations before wider deployment.
Entertainment Today, Embodied Agents Tomorrow
In the short term, Project Genie’s most obvious use is entertainment and creative experimentation. Its strengths in stylised, animated, and imaginative environments make it well suited to playful exploration and concept development.
However, longer term, DeepMind’s ambitions extend far beyond games. World models offer a scalable way to train embodied agents, including robots and autonomous systems, in simulated environments that mirror the complexity of the real world. This could reduce costs, improve safety, and enable faster iteration across industries such as logistics, manufacturing, and healthcare.
The same technology could also support training, education, and scenario planning, where exploring “what if” situations is valuable.
Business and Industry Implications
For Google, Project Genie is intended to reinforce its position at the frontier of advanced AI research and to support the premium value proposition of its AI Ultra subscription. It also strengthens Google’s influence over how world models are commercialised and evaluated.
For competitors, the move appears to raise the bar for what qualifies as a leading-edge AI system, increasing pressure to demonstrate interactive, real-time capabilities rather than static outputs.
For businesses and developers, Project Genie offers an early glimpse into tools that could reshape simulation, training, design, and creative workflows. At the same time, its limitations highlight that world models are still an emerging technology with unresolved challenges around realism, control, and cost.
For the wider AI market, the release highlights a broader transition from generative content towards generative environments, where interaction and agency matter as much as visual fidelity.
Challenges and Criticisms
Some key challenges remain for world models like Genie 3, particularly around scalability, realism, and controllability. For example, auto-regressive world generation is computationally expensive, which makes long-duration or large-scale simulations difficult to run. Critics have also questioned how quickly these systems can achieve reliable real-world accuracy, especially for safety-critical applications where errors or inconsistencies could have serious consequences.
There are also broader concerns around data use, intellectual property, and the environmental cost of large-scale compute. DeepMind’s cautious, limited rollout reflects an awareness of these issues, even as it pushes the technology forward.
Project Genie, as DeepMind presents it, is not yet a finished destination but a visible step in a much longer journey towards AI systems that can understand and navigate the world in ways that begin to resemble human reasoning.
What Does This Mean For Your Business?
Project Genie shows how world model research is now being tested outside the lab, with DeepMind deliberately exposing early capabilities to real users in order to gather feedback that research alone cannot provide. The limited access, short session lengths, and strict guardrails make it clear that this is about learning and validation rather than product launch.
For UK businesses, the immediate value is not in using Project Genie directly, but in what it signals. Interactive simulation has long-term relevance for training, design testing, robotics, and scenario planning, particularly in sectors where real-world experimentation is expensive or risky. As these models improve, they could become a practical tool for reducing uncertainty before decisions are made in physical environments.
For the wider AI market, the release raises expectations around what advanced AI systems should be able to do. The focus appears to be shifting from static content generation to interaction, consistency, and decision-making over time. Project Genie does not solve those challenges yet, but it does show more clearly how DeepMind is approaching them and increases pressure on competitors pursuing similar world model capabilities.