I'm not a game developer myself, but some of my favorite games carry a deep sense of intentionality. For instance, there is typically not a single item misplaced in a FromSoftware game (or, more recently, Lies of P). Almost every object is placed intentionally.
Games which lack this intentionality often feel dead in contrast. You run into experiences which break immersion, or pull you out of the experience that the developer is trying to convey to you.
It's difficult for me to imagine world models getting to a place where this sort of intentionality is captured. The best frontier LLMs fail to do this in writing all the time, and even in code, and the surface of experiences for those mediums often feels "smaller" than the user interaction profile of a video game.
It's not clear how these world models could be used modularly by humans hoping to develop intentional experiences. I don't know much about their usage (LLMs are somewhat modular: they can produce text, humans can work on it, other LLMs can work on it). Is the same true for the video output here?
All this to say, I'm impressed with these world models, but similar to LLMs with writing, it's not really clear what we are building towards. The ability to create less satisfying, less humane experiences, faster? Perhaps the most immediate benefit is the ability for robotic systems to simulate actions (by conjuring a world and imagining the implications).
In general, I have the feeling that we are hurtling towards a world with less intentionality behind all the things we experience. Everything becomes impersonal, more noisy, etc.
Making a world internally consistent by explicit placement gets harder as you increase in scale. When internal consistency is a factor impacting quality, there is a scale at which generated content eventually becomes the higher quality solution.
Secondly, when generating content with AI, the same rules around carelessness apply. There are certainly generative AI tools out there that offer few options for composing what you want, but that is not a necessary part of AI. Some of it is that people want rudimentary interfaces; some of it is that the generators are new enough that the control mechanisms are limited, because the focus is on doing something at all before doing it in a highly controlled way. In some ways the problem is that things are new enough that it can be hard to describe what desirable controllability would even look like, so releasing the generator to see what people would like it to be able to do is, I think, a reasonable path to follow prior to creating the control that people want. Part of it is also that there _are_ tools that give a high level of control over what is generated, but far fewer people get to see them. There are ways to control styles, object placement, camera motions, scene compositions, etc. The more specialised you get, the smaller the subset of people who need that specific control.
I think AI can make things possible for people who could not have done so without them, but it's still going to take care to make something special.
It seems to, even.
Whereas if you hand a router with a flush trim bit in it to someone and ask them to clean up the edge of a table, they will take one look at it and nope away from that dangerous spinning thing.
If they do have the mind to give it a shot and, despite a quality tool and bit, they bite into the table and ruin the line (or something much worse), no one will be surprised: they have no experience with, or recognition of, what expertise is in woodworking.
But with AI, it is much more hazy what expertise is.
The methodology for quality results is changing each week, and the variation in personal tooling involved makes it challenging to adopt another “expert”’s workflows.
Yes, exactly. Inundate the world with superficially plausible yet hollow content, including any desired themes. People who aren't very discerning won't complain; the others will be outmatched and find that 99/100 pieces are all noise and they will need to spend increasing amounts of time trying to find the 1, if they can.
I think there are some good parallels with Amazon: the broken sorting and manipulated unit pricing, coupled with the avalanche of cheap clones pushes users to give up and just buy one of the top listed products (a featured listing/Amazon-clone). If you do a web search for various products and go to images, Amazon product links often take up 50-90% of the results.
Put another way, the average game quality will go down, but the actual rate of "Great" games will go up.
I take raw material and make something out of it with a circular saw, largely unrestrained by anything other than cost, skill, and material.
With a microwave, I make things hot so I can eat them.
Aside: I wonder why that is? Why do we regard the microwave as "degenerate" compared to the oven? Why is baking seen as a calling while microwaving is, well, not? Is it that the ease of the microwave makes the effort less impressive? Maybe it's that you can't achieve certain effects like browning? Is it because of its 1970s association with "radiation" and TV dinners? Is it just cultural inertia?
Proper microwaving is what gave rise to the entire concept of "fast casual" restaurants, famously Applebee's (or "club B's" in the late-night-focused iterations!)
Complex entrees that could be partially cooked and frozen. Then rapidly microwaved on a custom program that varies the timing and intensity of cooking. Then finished on a grill or conventional heat source for less than a minute.
Microwaving food generally produces a lower-quality finished product, but you can take a similar approach at home. The shortcut is to just double the cooking time and cook at 50% power, then throw whatever the item is in a preheated pan for about a minute, if applicable. Other variations are possible too: I air-fry-finish most things like chicken nuggets, tater tots, etc., and the difference is considerable while still offering a significantly reduced cooking time.
I have used none of those recipes. The microwave is for making cold pizza 10% more palatable (or 80% more palatable if I've been drinking). In that regard, the "LLMs are microwaves" analogy really works for me: if I'm using one, either I want something fast and casual, or I'm drunk.
Why do I need more slopware? I have an entire Steam library of excellent games that deserve to be played first.
In reality, I don’t see any of this trending towards the theoretical happy path everybody always talks about. Most people give up trying to find something good on Amazon and just buy whatever vaguely plausible knock-off garbage shows up in the first few search results. Most people just take any job interview they’re offered even if it sucks. Most HR people don’t use it to enhance the quality of their decisions — it replaces their decision-making roles in many respects.
I’m an art school graduate and am in many art discussion communities. This is causing a massive industry-wide morale crater. In any sort of art, it damn near eliminates the reward of craftsmanship in favor of marketing useless trend-of-the-week bullshit. Far fewer people enter a market that can’t sustain them. The idea that this is going to create ‘more artists’, and that this must therefore mean more skilled artists, is fantasy. The skills you learn by prompting are not even on the same track as learning how to create things yourself. You essentially become a high-school intern acting as an art director, commissioning pieces. It’s instant gratification for people who don’t care enough about something to learn how to do it for real.
That's a pretty specific and one-sided example. There are tons of good games that don't rely on elaborate item placement (e.g. many Bethesda games are great because most items are useless decorations; they broke that rule in recent games by giving purpose to clutter, and it made them a lot worse). There are tons of good games not relying on this intentionality at all; they're either literally random cool ideas thrown at the wall, or even procedurally generated.
So, what's the deal?
As with ANY work in life, the quality of the result is a direct reflection of the care and intention behind it. Simplified, it's a reflection of how much effort _you_ put into it. It always shows, even in the AI day and age. It's just that the path to a result (without effort) is now way shorter, so volume is showing up and diluting the overall impression. That kind of volume cheapens every field it touches, so even more effort will have to be put in to show up on the radar.
The other is creating multi-modal models with a better understanding of our world. LLMs often fail at incredibly basic spatial reasoning ("someone left a package in front of your apartment, describe going there", or the "should I drive to the car wash or go there" example, etc). World models excel at these kinds of things (in theory). They develop a great understanding of physical spaces, object interactions, etc. They can simulate fluids, rigid-body physics, etc. You "just" have to get really good at making world models, then somehow marry them with an LLM in a way that ensures the LLM can benefit from the world model's training data. Nobody has managed to really do that yet.
So lots of hopes for the future. Until then they get commercialized as video models, or ways to experience your favorite forest, or to have a really bad video game ... whatever can be sold on a short time horizon to finance the actual goals
There are a lot of areas where predictive models make sense in the robotics stack, but doing it with "video world models" as is trendy this year is likely a bet in the wrong direction according to the evidence we have been amassing in the last 6 months.
One aspect of intentionality is that there’ll be a narrative payoff when you find something interesting. In videogames, the world is mostly pre-designed, so the designer has to predict what you’ll be interested in for the most part (in pen-and-paper RPGs, this is usually done better, because the human dungeon master/DM can plan ahead, but also improvise a payoff or modify the plot between sessions). If there was a world-model-generated game world, I guess the model would have to be “smart” to set up and execute those payoffs.
An advantage that the world model would have (and shares with a good human DM) is that everything is an interactable, and the players get to pick what they think is interesting. If everything is improv with a loose skeleton around it, you don’t have to predict as far out. I think world model generated games, if they even become a thing, will be quite a bit worse than conventionally designed ones for a long time (improv can be quite shallow!) but have a lot of potential if they work out.
FromSoft is an interesting example. They make the game more believable by having extremely missable quests; most of them just don’t block progress through the game, and you usually stumble across enough side quests naturally (although IMO the density was too low in Elden Ring; their system showed a bit of weakness in the less-guided context). The plot is pretty vague, but the vibes tell enough of a story that you don’t really mind. It’s sort of improv/pen-and-paper, but the player’s imagination is doing the job of the DM.
Fromsoft is perfectly happy for you to miss all of the direct exposition. It's as they intended and most people do. The intentionality of their world still draws people in and gives the world a sense of groundedness that keeps people coming back and separates it from the pale imitations. It's more than them being good at 'Vibes'.
The environment is built on the bones of a greater ongoing narrative that is intentionally obscured, even from the player who reads everything.
Dark Souls is a world in a constant cycle of Rebirth, Decay, the struggle against Entropy. All civilizations, at the end of the series, are stacked one upon another in an endless expanse of ash and dust as you bear witness to a permanent eclipse, a fading star, as time itself dissolves and the last fire fades.
Before that heavy-handed stuff though, the simple matter of the direction your character travels reinforces these motifs. Go down to the deepest depths and you'll witness what remains of the first civilizations. Climb up and you see the desperate attempts by the powerful to impose a false order that they hoped could forestall the inevitable.
You can even shatter the illusion of a golden order in the first game if you find the extremely missable secret boss. It couldn't be any more clearly 'said' if you were interested in paying attention.
Adding an AI model to explain or 'improv' the story of the world would destroy the whole purpose.
Many of the most popular games in the past decade are procedurally generated and have nothing “intentionally” placed (apart from tuning/tweaking the balance of the seeding algorithms).
I think you underestimate the intentionality that goes into developing procedural generation. Something like Dwarf Fortress isn't "place objects randomly" - it is layers upon layers of carefully crafted systems that build upon each other to produce specific patterns of outcome.
I guess what I'm saying is: Couldn't a world model with targeted training and thoughtfully tuned system prompts be directionally similar to the layered systems to produce specific patterns of outcome?
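To make the "layered systems" point concrete, here's a toy sketch (purely illustrative, not how Dwarf Fortress actually works): each layer constrains the next by rule, so the output is patterned and reproducible rather than uniformly random.

```python
import random

def generate_row(seed, width=20):
    """Toy layered generation: each layer constrains the next,
    so outcomes are patterned rather than uniformly random."""
    rng = random.Random(seed)              # deterministic per seed
    # Layer 1: elevation via a crude random walk (smooth-ish terrain).
    elev, h = [], 5
    for _ in range(width):
        h = max(0, min(9, h + rng.choice((-1, 0, 1))))
        elev.append(h)
    # Layer 2: moisture is a rule applied to elevation, not a new roll.
    moist = [9 - e for e in elev]
    # Layer 3: biome is a function of the layers beneath it.
    def biome(e, m):
        if e > 7:
            return "^"   # mountain
        if m > 6:
            return "~"   # wetland
        return "."       # plains
    return "".join(biome(e, m) for e, m in zip(elev, moist))

print(generate_row(42))   # same seed -> same world, every time
```

The "intentionality" lives in the rules relating the layers and in the tuning of their thresholds, which is exactly the part a designer sweats over.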
Are video game developers using these systems in their workflows? Would love to learn more!
The combination of "many", "most popular", and "nothing" is overstating it by a wide margin, but for example the majority of the vegetation in games as far back as Oblivion was procedurally placed.
It’s been my belief for several years that this is how the future of games will be constructed. Data in the background; a game engine for rules application/physics execution/orchestration/maybe low-poly rendering; and an AI world model taking low-resolution input and generating customized visuals/effects/textures/everything, even camera location, but still constrained by concrete rules in the game engine.
I’m sure one day it might all be handled by AI, but the above seems much more realistic and achievable than expecting AI to do all of these things, at once, correctly, every frame.
On that first point I think it's important to remember that the lineage of video games comes from board & card games and sports. There's always been an ability to inject more complexity and less intentionality into those things. Sports in some ways are like a simplified and altered role-play of war battles, and more realistic war roleplaying does exist but it has less appeal.
As humans we like solving things and noticing patterns and the intentionality of games taps into that appeal.
On the latter point I do think these world models will eventually be used to meaningfully contribute to building games. I think people will have to find new ways to design that balances intentionality against the freeform nature of these simulations, but it may take a while to have the capability to do so.
These world models are key for robotics and for coherence in video generation.
Give a world model images of a factory, and the robot can now simulate tasks and pick the best result.
Give a world model images/context etc. and it can generate a coherent world for video generation.
What this world model system might be able to do for us in regard to gaming or virtual reality: either simulate 'old' environments like the house of your grandparents (gaussian splatting, but interactive) or potential new ones like a house, a kitchen, a remodeling.
It can also be a very interesting, easy-to-approach VR environment where you can start building your world with your voice. That would be very intentional. After all, world building is not necessarily connected to being able to generate 3D assets. Just because you need to go this route today doesn't mean you have to do it tomorrow.
Where you look for an intentionally evoked experience authored by a game designer, I am looking for an unexplored world unfolding before me filled with emergent and unique phenomena that perhaps no one and not even the game designer has seen before.
for example, I am 100% certain that ANY model could write a better Dragon Age sequel than the rotting corpse of Bioware did, because only humans can despise their audience and their source material. an LLM would dutifully attempt to produce more of the thing rather than 're-imagine' the thing for 'the modern audience'.
LLMs had nothing to do with this
Yes, we haven't gone that far with creating consciousness yet, but there is gonna be a lot of money around neural computing devices for consumers in the coming decades, so that will speed up knowing what sense data you need for consciousness.
Like for instance... Dwarf Fortress? Minecraft?
Generative AI is just another method to go procedural generation. Not necessarily a better way. Or you could even argue that procedural generation is a form of generative AI... But either way, there are games where the lack of intentionality is central to the appeal.
It's a very dead game on its own. They are still very intentional about adding and changing the tools by which you make your own fun, though.
DF for some reason doesn't fit into this category for me. Minecraft feels dead to me, while many other games that utilize procedural generation do not.
You’re right - but that world is not the end of the story. The intentionality matters. Human creations matter because they connect us. I don’t know how long it will take, but people will build judgement as to what makes for good use of these tools to make meaningful things and expand our creative horizons in deeply human ways. Mind you, there will always be shallow slop. It’ll just take time for creators to learn how to use these tools to make something that isn’t slop.
Would you consider it possible that the way non-intentionally placed items break the game immersion for you is because they appear in such a way that you think you can interact with them in a certain way, but you can't?
Like if there's an extra door in the house you're trying to get into, but that door doesn't really open, then in your mind that breaks the integrity of the game's systems. If so, I think the LLM response is that there are no more doors that don't open and that the world can be generated as needed.
No computer can handle the complexity of even a small town. But it would be possible, at least in the future, to generate the part of the world you interact with, which would heighten the immersion.
the intentionally placed tree serves no particular in-game job mechanically. it instead points your eyes to the right place when you walk up the path, and then again when you look back down from above.
When they're saying everything is intentionally placed, they mean everything, whether it looks important or not. It's all directed toward a cohesive core.
Everyone is right to be skeptical of this coming from a 2.8B model. Weights or it didn't happen.
> 720p, 1-min video generation with 6-DoF camera control
As nl said,
> The model is out here: https://huggingface.co/Efficient-Large-Model/SANA-Video_2B_7...
README says "intended for research use only"
Code license is Apache 2.0
Model license (nvidia open...) says:

> Models are commercially usable.
> You are free to create and distribute Derivative Models
(As usual: model output is unrestricted, and also unprotectable absent human authoring)

I hope nobody leaves that page open on a metered or capped network connection.
I'm surprised github hasn't suspended the page.
Are AI researchers so used to burning through compute and network resources that they don't stop to think about a webpage that will autoplay and loop multiple HD videos?
It appears that there are 62 videos on the page. They're generally 16fps and 60s long. All are h.264, 1280x704. The median bitrate is 4.962 Mbps.
I don't know enough about JS to try to understand WTF it is doing, but there's only 1.3 GB of video on that page. At a transfer speed of 400Mbps, the whole mess of them should be downloadable in around 30 seconds.
But it wasn't behaving that way at all. It instead behaved as an excellent bandwidth-waster.
(Woe to those who click this link on metered connection, I guess.)
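For anyone checking the arithmetic on the transfer-time claim above, a quick back-of-envelope in Python (using the 1.3 GB and 400 Mbps figures quoted in the comment):

```python
# Back-of-envelope check of the numbers quoted above.
total_bytes = 1.3e9      # ~1.3 GB of video on the page
link_bps = 400e6         # 400 Mbps transfer speed (bits per second)

seconds = total_bytes * 8 / link_bps   # bytes -> bits, divide by rate
print(f"{seconds:.0f} s")              # ~26 s, i.e. roughly half a minute
```

So "around 30 seconds" holds up; the observed slowness is plausibly the page's own loading behavior, not raw file size.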
Empathizing about problems you don't face is a hard product/ux and management skill. Facebook famously simulated 2G on Tuesdays 10 years ago[1] for example to get their employees to see the problems their users have.[2]
People don't put effort into noticing (solving comes next) problems they don't face. It is why things like a11y and i18n need regulation like the ADA etc.
[1] https://engineering.fb.com/2015/10/27/networking-traffic/bui...
[2] While it would be hard to attribute directly, GraphQL and, to an extent, React were probably influenced by these kinds of things.
I’m sure they’ll give their Claude instance a stern talking-to.
Also, will this run on RTX 4090 with 24GB memory?
Thank you!
That world-state can be anything, but in the last year or two, the term has taken a narrower meaning: a video generation model that reacts naturally to game-like controls, as if it was simulating a videogame. But there's no additional state behind the video frames.
> A dedicated 17B long-video refiner sharpens texture, motion, and late-window quality on top of the long-rollout backbone.
The 'Refiner' effect seems to do the opposite, if the examples are representative: in all cases the first-stage images look better than the 'refined' ones. Less clutter, more realistic, less 'cowbell' for those who know the phrase.
And those same people forget that it's been 3 years from that awful Will Smith spaghetti video to what we have today, which is the beginning of controllable real-time videos, aka games.
There’s no doubt they’re technically impressive, but what does one do with it?
It is inevitable that learned simulators will replace hand-coded simulators, as it is a straightforward application of the Bitter Lesson: http://www.incompleteideas.net/IncIdeas/BitterLesson.html
By enabling general purpose robotics, world models will be one of the most useful inventions of all time. For examples of what I'm talking about in current research, check:
Dreamer 4: https://danijar.com/project/dreamer4/
DreamDojo: https://arxiv.org/abs/2602.06949
Tesla's world model: https://www.youtube.com/watch?v=LFh9GAzHg1c
Waymo's world model: https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...
This one is probably too small to be useful for that, and not diverse enough? But I could be wrong.
However, there are a few promising markets, assuming WMs continue to get better and cheaper:
1. Robotics training / evaluation: modern end-to-end (sensors-to-control) robot policies require simulators that are almost indistinguishable from reality. If your sim is distinguishable from reality, the evaluation metrics you get from sim don't mean anything and the policies you train in sim don't work. World models will likely be the highest-fidelity robotics simulators, since WMs are data-driven and get arbitrarily more-realistic given more data/compute. This is why so many robotics companies have WM projects [1] [2] [3] [4].
2. Video frontends for agents: in the same way that today's frontier labs are building realtime voice interfaces [5] which behave like a phone call, realtime video interfaces will behave like a video call. Early forms of this don't feel compelling IMO [6] [7], but once the models can instantly blend between rendering the agent itself, drawing diagrams/visualizations, rendering video, etc. I can see it surpassing pure voice mode.
3. Entertainment: zero-shot world generation (i.e. holodeck, genie 3; paste in an image/video/text prompt and get a world) will be a fun toy but I'm not convinced it has any long-term value. I'm more optimistic about proper narrative experiences where each scene/level is a small, carefully-crafted world (behaving like a normal film scene if you don't touch the controls, and an uncharted/TLoU-style narrative game if you do), such that the sequence of scenes builds up a larger story.
[1] https://wayve.ai/thinking/gaia-3/
[2] https://xcancel.com/Tesla/status/1982255564974641628 / https://xcancel.com/ProfKuang/status/1996642397204394179
[3] https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-f...
[4] https://www.1x.tech/discover/world-model-self-learning
[5] https://thinkingmachines.ai/blog/interaction-models/
[6] https://runwayml.com/news/introducing-runway-characters
[7] https://blog.character.ai/character-ais-real-time-video-brea...
Imagine playing Red Dead Redemption 2: you attempt to ride your horse from Saint Denis to Valentine, and Valentine no longer exists, or is a completely different town located half a mile off from where it was originally.
I just don't see how this would work...
You could also use these models to generate assets for a game during development whether that's simple cutscenes or assets produced through gaussian splatting or some other process.
If these models and others can be run cost-effectively on a cloud service, or even locally at some point, then you could do some interesting things by combining them with 3D mesh generation, img2img, vid2vid, etc. Just think about even simple games like Papers, Please and the whole genre it spawned, which uses short episodes where you have to make a guess based on what you see; there's a lot of potential for creating new mechanics around generative imagery.
Remember video generation? 3 years ago the will smith spaghetti video came out.
You see how this trend will only continue? Game development is going to get really weird.
> A dedicated 17B long-video refiner sharpens texture, motion, and late-window quality on top of the long-rollout backbone.
In this case, what looks interesting is the one minute coherence and the massive speedup - they claim 36x over open models with similar capabilities. You can tell they aren’t aiming for state of the art visuals — looks very SD 1.5 in terms of the output quality.
I can't say I'm looking forward to an AI video future.
I'm curious if a younger me would have adapted much faster.
Seedance 2.0 and Kling 3 are regarded as the best closed-source video models we have. I have subscribed to a few AI video subreddits; the consensus atm is they are good for anything but long-form videos with humans.
No surprises that we're very good at spotting even the most subtle differences while looking at other people.
I've been doing some content with people at https://industrialallusions.com
https://www.reddit.com/r/HiggsfieldAI/
Higgsfield have multiple models available, people use Kling usually 2.5 & 3. There are a few good examples posted right now you'll notice the subtle differences.
I have tried to generate things myself and it's extremely hard to have more than 7-8 clips that are consistent; eventually you'll accept a compromise. I think that's why there isn't any long-form content being done yet. Getting good results is sometimes just "chance", regardless of how much reference data you have.
EDIT> don't ask how I came up with this quote
It's honestly impressive, on the surface. The visuals are gorgeous... but it's still empty. What makes a "world" a world is precisely its coherence. It's not about how it looks but rather how it "works". The plants in an ecosystem are a certain way because of the available resources, all the way down to forces like gravity. It doesn't just "look" like that. To echo Konrad Lorenz: a fish doesn't just swim in the water; rather, the fish IS an efficient representation of the water it lives within. Here, in such "worlds", there is nothing happening. There is minimal superficial coherence, no logic, nothing.
The ultimate liminal spaces.