It sounds like the advantages here are:
- Optimized sampling, rather than just sampling every lightmap texel uniformly. My idea was to tie the lightmap to LOD, but I feel like this is much smarter.
- Optimized light accumulation, dedicating more resolution to high-light areas to reduce noise.
It seems like it has a more advanced "stability" calculation.
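To make the second point concrete, here's a hypothetical sketch (my own illustration, not the article's actual code) of importance-driven sample allocation: each texel gets a share of a fixed per-frame sample budget proportional to its current brightness, so bright (noisy) regions converge faster.

```javascript
// Hypothetical helper: split `totalBudget` samples across texels in
// proportion to their current luminance estimate.
function allocateSamples(texelLuminance, totalBudget) {
  const sum = texelLuminance.reduce((a, b) => a + b, 0);
  if (sum === 0) {
    // No light yet: spread the budget evenly so every texel gets probed.
    const even = Math.floor(totalBudget / texelLuminance.length);
    return texelLuminance.map(() => even);
  }
  // At least 1 sample per texel so dark areas still update occasionally.
  return texelLuminance.map((l) =>
    Math.max(1, Math.round((l / sum) * totalBudget))
  );
}

// A bright window texel gets far more of the budget than a dark corner.
const budget = allocateSamples([0.05, 0.1, 4.0], 256);
```

Real schemes would weight by variance rather than raw brightness, but the budget-splitting idea is the same.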
Things that are the same:
- Lighting is still incremental: when they e.g. change the light direction, even with the optimizations, there's still some ghost light that slowly drifts across the scene, so I'm not sure how this would hold up in really dynamic situations (e.g. car traffic).
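The ghosting follows from how incremental accumulation generally works (my assumption about the technique, not the article's exact code): each frame's noisy estimate is blended into the cache with a small weight, so when a light moves, the stale contribution decays over many frames instead of vanishing.

```javascript
// Exponential moving average: the standard incremental accumulator.
function blendFrame(cached, fresh, alpha = 0.05) {
  return cached * (1 - alpha) + fresh * alpha;
}

// A light switches from 1.0 to 0.0 at frame 0; the ghost lingers.
let radiance = 1.0;
for (let frame = 0; frame < 30; frame++) {
  radiance = blendFrame(radiance, 0.0);
}
// After 30 frames, roughly 21% of the old light is still visible
// (0.95^30 ≈ 0.21) — hence the slow-moving ghost.
```

Raising `alpha` converges faster but lets per-frame noise back in, which is exactly the tradeoff these systems tune.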
Things that are different:
- It looks like the light data is cached relative to the current view. I store light for the whole scene, so there's no light fluctuation during camera movement/rotation. I think the tradeoff is that view-relative caching is probably more optimized (light detail is view invariant), which I'd guess matters mostly for HD-style assets.
Limitations of both, IIUC:
- Reflections, water, etc. Radiosity is diffuse lighting only. I think you can combine it with other hacks like screen-space reflections, though.
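The combination is usually just a composite of the two terms. A hedged sketch (my simplification, not either project's code): radiosity supplies the diffuse term, a separate pass (e.g. screen-space reflections) supplies the specular term, and the material decides the mix.

```javascript
// Toy composite: dielectrics are mostly diffuse, metals mostly reflective.
// Real renderers use a proper Fresnel/BRDF split; this is just the shape
// of the hack.
function shade(diffuseRadiosity, specularReflection, metalness) {
  return diffuseRadiosity * (1 - metalness) + specularReflection * metalness;
}

const plaster = shade(0.8, 0.3, 0.0); // matte wall: all radiosity
const chrome = shade(0.8, 0.3, 1.0);  // mirror: all reflection pass
```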
> The fact that you can get physically-plausible light bounce and temporal stability all running in real-time on a web page... on a phone... feels like we're actually in the future.
Even as some things about the open web are in trouble, others are thriving! This was such a great in-depth read; I learned a ton, got to see great graphics, and played with lots of knobs. A+ :)
I think there's some discussion about raising that limit on adapters that support it, but right now we're stuck at 10. It would be SUPER beneficial to raise it for a wide variety of projects. Two I'm working on now are WebGPU implementations of Alber's Markov Chain Path Guiding paper and the ReSTIR PT Enhanced paper, and both are similarly handicapped by the storage buffer limit.
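For what it's worth, WebGPU already lets an app opt in to higher limits where the adapter advertises them; the gotcha is that requesting more than the adapter reports makes `requestDevice` reject, so you clamp first. A sketch (real API names, but the numbers are illustrative):

```javascript
// In a browser context (commented out here since it needs a GPU):
//
//   const adapter = await navigator.gpu.requestAdapter();
//   const wanted = negotiateLimit(16, adapter.limits.maxStorageBuffersPerShaderStage);
//   const device = await adapter.requestDevice({
//     requiredLimits: { maxStorageBuffersPerShaderStage: wanted },
//   });

// Ask for what you want, capped at what the adapter offers.
function negotiateLimit(wanted, adapterMax) {
  return Math.min(wanted, adapterMax);
}
```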
Something went wrong for me: "Cannot read properties of null (reading 'isInterleavedBufferAttribute')"