So I was trying to implement ping-pong buffers in my #webgpu library (emphasis on trying). The code throws no errors, but the values are not changing, and I think it's basically how my library is built: as far as I can tell (because I couldn't figure it out), the entries are being swapped twice or not being swapped at all. I'm tired, but I have another idea to test.
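For reference, here's the minimal ping-pong pattern I'm aiming for (a sketch in raw WebGPU, not my library's internals; `device`, `pipeline`, `byteSize`, and `workgroupCount` are assumed to already exist, and the binding layout of read-at-0 / write-at-1 is made up):

```js
// Two storage buffers; each frame one is read, the other written.
const buffers = [0, 1].map(() =>
  device.createBuffer({
    size: byteSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  })
);

// bindGroups[0] reads buffers[0] and writes buffers[1]; bindGroups[1] is the mirror.
const bindGroups = [0, 1].map((i) =>
  device.createBindGroup({
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: buffers[i] } },     // src (read)
      { binding: 1, resource: { buffer: buffers[1 - i] } }, // dst (write)
    ],
  })
);

let frame = 0;
function step() {
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(pipeline);
  pass.setBindGroup(0, bindGroups[frame % 2]); // select by parity, never mutate
  pass.dispatchWorkgroups(workgroupCount);
  pass.end();
  device.queue.submit([encoder.finish()]);
  frame++; // the ONLY place the ping-pong state advances
}
```

A double swap per frame cancels itself out and looks exactly like no swap at all, which would match the frozen values; keeping the swap in exactly one place rules that out.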
I've put together an updated version of the Sponza scene with uncompressed PNG and compressed AVIF textures. I wrote about the process and compared the results against KTX.
We are seeking short talks, demos, and presentations that showcase real-world use of #WebGL, #WebGPU, #glTF, and #GaussianSplatting on the web for the 3D on the Web special event during the week of GDC.
If you are building something interesting and would like to share it with the community, we would love to hear from you. Please note that sales-forward presentations are unlikely to be selected.
To submit a 10-minute talk or demo, please email your description to [email protected]
While working on spark.js, I realized that normal map compression formats weren’t supported in popular frameworks like three.js. I addressed that gap by adding the necessary support to three.js and wrote an article to shed some light on the topic, drawing from my experience in real-time 3D graphics.
For the new version of POINTS, I'm adding a way to load an HTMLElement as a texture. I think it's a simple way to load a weird font asset without an atlas/spritesheet, or really anything in the DOM.
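A sketch of one way to pull this off (this is the general technique, not necessarily how POINTS does it; the helper name is made up and `device` is assumed to be a GPUDevice): serialize the element into an SVG foreignObject, rasterize that to an ImageBitmap, and copy it into a texture.

```js
// Hypothetical helper: rasterize a DOM element and upload it as a texture.
async function elementToTexture(device, element, width, height) {
  // XMLSerializer adds the XHTML namespace, which foreignObject needs.
  const xml = new XMLSerializer().serializeToString(element);
  const svg =
    `<svg xmlns="http://www.w3.org/2000/svg" width="${width}" height="${height}">` +
    `<foreignObject width="100%" height="100%">${xml}</foreignObject></svg>`;

  const blob = new Blob([svg], { type: 'image/svg+xml' });
  const bitmap = await createImageBitmap(blob);

  const texture = device.createTexture({
    size: [width, height],
    format: 'rgba8unorm',
    usage:
      GPUTextureUsage.TEXTURE_BINDING |
      GPUTextureUsage.COPY_DST |
      GPUTextureUsage.RENDER_ATTACHMENT, // required by copyExternalImageToTexture
  });
  device.queue.copyExternalImageToTexture({ source: bitmap }, { texture }, [width, height]);
  return texture;
}
```

The usual caveat with this route: the element has to be self-contained when rasterized (inline styles, no cross-origin resources), because the SVG is drawn in isolation.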
Another experiment with my POINTS library. This is basically an old physarum/slime demo I made a while ago, but rebuilt with compute shaders and particles, and it's a lot more efficient than before.
The idea of physarum is that particles leave a trail and can follow trails left by other particles when those trails are stronger than their own, so they eventually follow each other, roughly as in the sketch below.
The cool thing about this concept is that you can make the particles follow things other than the trails; in this case, a video.
The particles have two options: spawn from the center or spawn randomly, and this changes a bit how the video is interpreted.
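The per-particle sense-and-steer step looks roughly like this (a WGSL sketch, not the demo's exact code; the struct layout and param names are made up):

```js
const physarumWGSL = /* wgsl */ `
  struct Particle { pos: vec2f, angle: f32, pad: f32 }
  struct Params { senseDist: f32, senseAngle: f32, turnSpeed: f32, speed: f32 }

  @group(0) @binding(0) var<storage, read_write> particles: array<Particle>;
  @group(0) @binding(1) var trailIn: texture_2d<f32>;  // last frame's trail
  @group(0) @binding(2) var trailOut: texture_storage_2d<r32float, write>;
  @group(0) @binding(3) var<uniform> params: Params;

  // Read the trail intensity at a probe point ahead of the particle.
  fn sense(pos: vec2f, angle: f32) -> f32 {
    let probe = pos + vec2f(cos(angle), sin(angle)) * params.senseDist;
    return textureLoad(trailIn, vec2i(probe), 0).r;
  }

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    var p = particles[id.x];

    // Three probes: ahead, ahead-left, ahead-right.
    let f = sense(p.pos, p.angle);
    let l = sense(p.pos, p.angle + params.senseAngle);
    let r = sense(p.pos, p.angle - params.senseAngle);

    // Steer toward the strongest trail; straight ahead wins ties.
    if (l > f && l > r) { p.angle += params.turnSpeed; }
    else if (r > f && r > l) { p.angle -= params.turnSpeed; }

    // Move, then deposit a new trail sample at the new position.
    p.pos += vec2f(cos(p.angle), sin(p.angle)) * params.speed;
    textureStore(trailOut, vec2i(p.pos), vec4f(1.0));
    particles[id.x] = p;
  }
`;
```

The trail texture itself usually gets a separate fade/blur pass each frame so old trails decay; that's what lets paths merge instead of saturating. To follow a video instead, you blend the video frame into the texture the particles sense from.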
With the new version of my WebGPU library, POINTS v0.6.0 (https://github.com/Absulit/points), I added a bunch of new features like support for depth maps, cameras, bloom, and others, so I wanted to make a demo that used all of these.
This demo has 262K+ instanced particles (cubes) animated in a compute shader, with movement driven by a noise texture. It uses the depth map to create a slight depth-of-field effect, and some particles are emissive. To complete the effect, there's bloom that affects only the bright particles, thanks to HDR output that lets those cubes have a brightness beyond white; those values are then tonemapped to fit the final output.
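The bright-pass idea in a nutshell (a WGSL sketch of the general technique, not POINTS' actual bloom; the binding layout is made up): because the output is HDR, emissive cubes can have luminance above 1.0, and only those pixels feed the bloom blur.

```js
const brightPassWGSL = /* wgsl */ `
  @group(0) @binding(0) var hdrInput: texture_2d<f32>;
  @group(0) @binding(1) var bloomOut: texture_storage_2d<rgba16float, write>;

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    let color = textureLoad(hdrInput, vec2i(id.xy), 0).rgb;

    // Brighter than white (luminance > 1.0) only exists in HDR;
    // those pixels are the emissive particles, and only they bloom.
    let luma = dot(color, vec3f(0.2126, 0.7152, 0.0722));
    let bloom = select(vec3f(0.0), color, luma > 1.0);
    textureStore(bloomOut, vec2i(id.xy), vec4f(bloom, 1.0));
  }
`;
```

Blur that result, add it back over the scene, then tonemap, and only the emissive cubes glow.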
I think I understand compute shaders better than before. One thing that still eludes me: even though I know I have 1 million particles, not all of them are visible, even when I thought they should be. I'm not entirely sure what it is (not an expert, but I manage); it could be some sync issue related to the random numbers, like they generate the same value and the particles overlap.
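If it does turn out to be the random numbers, the standard fix is to hash each particle's own index so no two particles can share a seed; a sketch using the well-known PCG hash (the particle layout here is made up):

```js
const initWGSL = /* wgsl */ `
  struct Particle { pos: vec2f, angle: f32, pad: f32 }
  @group(0) @binding(0) var<storage, read_write> particles: array<Particle>;

  // PCG hash: distinct inputs give effectively distinct outputs,
  // so two particles can never start from the same value.
  fn pcg(v: u32) -> u32 {
    let state = v * 747796405u + 2891336453u;
    let word = ((state >> ((state >> 28u) + 4u)) ^ state) * 277803737u;
    return (word >> 22u) ^ word;
  }

  // Map a hashed seed to [0, 1).
  fn rand01(seed: u32) -> f32 {
    return f32(pcg(seed)) / 4294967296.0;
  }

  @compute @workgroup_size(64)
  fn init(@builtin(global_invocation_id) id: vec3u) {
    // Interleave seeds (2i, 2i+1) so x and y never collide across particles.
    let x = rand01(id.x * 2u);
    let y = rand01(id.x * 2u + 1u);
    particles[id.x].pos = vec2f(x, y);
  }
`;
```

That's an easy hypothesis to test: if particles are stacking because of identical seeds, switching to per-index hashing should visibly spread them out.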