So I was trying to implement ping-pong buffers in my #webgpu library (emphasis on trying). The code throws no errors, but the values aren't changing, and I think it comes down to how my library is built: as far as I can tell (I couldn't fully figure it out), the entries are being swapped twice or not being swapped at all. I'm tired, but I have another idea to test.
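For reference, the ping-pong pattern only works if the read/write roles flip exactly once per frame. A minimal sketch of the bookkeeping, independent of any GPU API (the class and names here are mine, not the library's):

```javascript
// Two buffers alternate roles each frame: read from one, write to the other.
class PingPong {
  constructor(a, b) {
    this.buffers = [a, b];
    this.index = 0; // which buffer is currently the "read" side
  }
  get read()  { return this.buffers[this.index]; }
  get write() { return this.buffers[1 - this.index]; }
  // Call exactly ONCE per frame, after the pass that wrote into `write`.
  swap() { this.index = 1 - this.index; }
}

// Simulate two frames of a pass that increments the read value into write.
const pp = new PingPong({ value: 0 }, { value: 0 });
for (let frame = 0; frame < 2; frame++) {
  pp.write.value = pp.read.value + 1; // stand-in for the compute pass
  pp.swap();
}
console.log(pp.read.value); // 2 — the values advance each frame

// The failure mode described above: swapping once when building the bind
// group entries AND once when dispatching toggles the index twice per
// frame, so read/write never actually change and everything looks frozen.
```

The fix is usually to make exactly one piece of code own the swap.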
I've put together an updated version of the Sponza scene with uncompressed PNG and compressed AVIF textures. I wrote about the process and compared the results against KTX.
We are seeking short talks, demos, and presentations that showcase real-world use of #WebGL, #WebGPU, #glTF, and #Gaussian Splatting on the web for the 3D on the Web special event during the week of GDC.
If you are building something interesting and would like to share it with the community, we would love to hear from you. Please note that sales-forward presentations are unlikely to be selected.
To submit a 10-minute talk or demo, please email your description to [email protected]
While working on spark.js, I realized that normal map compression formats weren’t supported in popular frameworks like three.js. I addressed that gap by adding the necessary support to three.js and wrote an article to shed some light on the topic, drawing from my experience in real-time 3D graphics.
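The core trick behind two-channel normal map formats like BC5 (and EAC RG11): only X and Y are stored in the texture, and Z is reconstructed at sample time. A plain-JS sketch of that math (the function name and remap are mine, just to illustrate the idea):

```javascript
// BC5 stores only the X and Y components of a unit-length tangent-space
// normal; the Z component is recomputed from the unit-length constraint.
function reconstructNormal(x, y) {
  // x, y in [0, 1] as sampled from the two-channel texture
  const nx = x * 2 - 1; // remap to [-1, 1]
  const ny = y * 2 - 1;
  // Tangent-space normals point out of the surface, so z >= 0;
  // the max() guards against rounding pushing the sum past 1.
  const nz = Math.sqrt(Math.max(0, 1 - nx * nx - ny * ny));
  return [nx, ny, nz];
}

console.log(reconstructNormal(0.5, 0.5)); // [0, 0, 1] — a flat surface
```

Dropping Z entirely is what lets these formats spend both channels on the components that actually carry detail.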
And now I've implemented it. But instead of porting it to the slowest, most decrepit retromachine I can find, I'm going the other direction: what kind of visual effects can I achieve if I throw a decent GPU at it?
The video above in 🧵 1 is a screen capture from my JavaScript + WebGPU implementation. It's running at 20 FPS, using about 5% of the CPU and 3% of the GPU. More to come...
For the new version of POINTS I'm adding a way to load an HTMLElement as a texture. I think it's a simple way to load a weird font asset without an atlas/spritesheet. Or basically anything in the DOM.
Another experiment with my POINTS library. This is basically an old physarum/slime demo I made a while ago, rebuilt with compute shaders and particles, and a lot more efficient than before.
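For anyone curious, the per-agent physarum update is simple enough to sketch in plain JS (parameter values here are arbitrary and the names are mine; the real demo runs this kind of step in a compute shader):

```javascript
// Physarum step: each agent senses the trail map ahead and to each side,
// turns toward the strongest signal, moves, and deposits trail behind it.
const W = 64, H = 64;
const trail = new Float32Array(W * H);

function sense(x, y) {
  const xi = ((Math.round(x) % W) + W) % W; // wrap around the grid
  const yi = ((Math.round(y) % H) + H) % H;
  return trail[yi * W + xi];
}

function stepAgent(agent, sensorAngle = 0.5, sensorDist = 4, turn = 0.3, speed = 1) {
  const { x, y, heading } = agent;
  const f = sense(x + Math.cos(heading) * sensorDist,
                  y + Math.sin(heading) * sensorDist);
  const l = sense(x + Math.cos(heading - sensorAngle) * sensorDist,
                  y + Math.sin(heading - sensorAngle) * sensorDist);
  const r = sense(x + Math.cos(heading + sensorAngle) * sensorDist,
                  y + Math.sin(heading + sensorAngle) * sensorDist);
  if (l > f && l > r) agent.heading -= turn;      // steer left
  else if (r > f && r > l) agent.heading += turn; // steer right
  agent.x = (agent.x + Math.cos(agent.heading) * speed + W) % W;
  agent.y = (agent.y + Math.sin(agent.heading) * speed + H) % H;
  trail[Math.floor(agent.y) * W + Math.floor(agent.x)] += 1; // deposit
}

const agent = { x: 32, y: 32, heading: 0 };
for (let i = 0; i < 10; i++) stepAgent(agent);
console.log(agent.x, agent.y); // 42 32 — moves straight on an empty map
```

A separate pass that blurs and fades the trail map each frame is what makes the characteristic network patterns emerge.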
With the new version of my WebGPU library POINTS v0.6.0 (https://github.com/Absulit/points), I added a bunch of new features like support for depth maps, cameras, bloom and others, so I wanted to make a demo that used all of these.
This demo has 262K+ instanced particles (cubes) animated in a compute shader, with movement driven by a noise texture. It uses the depth map to create a slight depth-of-field effect, and some of the particles are emissive. To complete the look, bloom is applied only to the bright particles, thanks to the HDR output: those cubes can have a brightness beyond white, and the values are then tonemapped to fit in the final output.
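As a rough illustration of the HDR-to-display step described above (whether POINTS uses these exact curves is an assumption on my part; Reinhard and a simple threshold are just common choices):

```javascript
// HDR values above 1.0 ("brighter than white") must be compressed into
// [0, 1] for display. Reinhard is one common tonemapping operator.
function reinhard(c) {
  return c / (1 + c);
}

// Bloom typically isolates only the HDR energy above a threshold, so
// ordinary surfaces (c <= 1) contribute nothing to the glow.
function bloomContribution(c, threshold = 1.0) {
  return Math.max(0, c - threshold);
}

console.log(reinhard(4.0));          // 0.8 — very bright, still displayable
console.log(bloomContribution(4.0)); // 3 — emissive cube feeds the bloom pass
console.log(bloomContribution(0.5)); // 0 — a normal surface doesn't glow
```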
I think I understand compute shaders better than before. One thing that still eludes me: even though I know I have 1 million particles, not all of them are visible, even when I thought they should be. I'm not entirely sure what it is (not an expert, but I manage); it could be a sync issue related to random numbers, where the generator produces the same value and the particles overlap.
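If the missing particles really are ones drawing identical random values and stacking on top of each other, one common fix is to hash each particle's index into its own random stream. A sketch of the PCG-style integer hash popular in shaders, ported to plain JS (nothing here is from POINTS):

```javascript
// If every invocation seeds its RNG identically, all particles get the
// same "random" position and overlap. Hashing the particle index gives
// each one a distinct, well-distributed value instead.
function pcgHash(n) {
  const state = (Math.imul(n, 747796405) + 2891336453) >>> 0;
  const word = Math.imul((state >>> ((state >>> 28) + 4)) ^ state, 277803737) >>> 0;
  return ((word >>> 22) ^ word) >>> 0;
}

// Normalize the 32-bit hash to [0, 1) for use as a random coordinate:
const rand01 = (i) => pcgHash(i) / 4294967296;

console.log(rand01(0), rand01(1), rand01(2)); // three distinct values
```

In WGSL the same hash works on `u32` with the global invocation index as input, so no cross-invocation state is needed at all.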
Now that three.js supports BC5/EAC normal maps, I've updated the spark.js gltf demo with new texture assets.
Last time I simply used gltf-transform with default settings, but this time I tailored the settings to each specific texture type and the resulting quality is much better: