  • Compared to e.g. pushing a button in VS Code and having your browser pop up with a pre-filled GitHub PR page? It’s clunky, but that doesn’t mean it’s not useful.

    For starters, it’s entirely decentralised: a single email address is all you need to contribute to anything, regardless of where and how it’s hosted. There was actually an article on Lobsters recently that I thought was quite neat, about how the combination of a patch-based workflow and email allows for entirely offline development, something that’s simply not possible with the likes of GitHub or Codeberg.

    https://ploum.net/2026-01-31-offline-git-send-email.html

    The fact that you can “send” an email without actually sending it means you can queue the patch submissions up offline and then send them whenever you’re ready, along with downloading the replies. A rough sketch of that flow is below.
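
    A minimal sketch of the offline loop, assuming git send-email is already configured for your mail account (the paths and the mailing-list address are made-up examples):

    ```sh
    # Offline: commit locally as usual, then export the new commits
    # as plain-text patch files, one file per commit.
    git format-patch origin/main --output-directory outgoing/

    # Still offline: the patches are just text, so they can be
    # reviewed or edited before anything leaves the machine.

    # Online again: mail the whole queue to the project's list.
    git send-email outgoing/*.patch --to='~example/project@lists.sr.ht'
    ```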


  • Sourcehut uses it; it’s actually the only way to interact with repos hosted on it.

    It definitely feels outdated, yet it’s also the workflow git was designed around. Git makes it really easy to rewrite commit history, while also warning you not to force-push rewritten history to somewhere public (like e.g. a PR branch). None of that is an issue with the email workflow, where each email is always an entirely isolated new commit; instead of force-pushing a rewrite, you just mail a fresh revision, as sketched below.
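
    For example (branch and list names are placeholders; -v2 just marks the resubmission):

    ```sh
    # Rewrite local history as aggressively as you like...
    git rebase --interactive origin/main

    # ...then export the reworked series as a v2. Nothing public is
    # overwritten; reviewers simply receive a new batch of emails.
    git format-patch origin/main -v2 --output-directory outgoing/
    git send-email outgoing/v2-*.patch --to='~example/project@lists.sr.ht'
    ```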



  • Windows is pretty much the same as Linux: it exposes the raw events from the device, and it’s up to the app to handle them. Pretty sure the overlay handles that by sitting between the OS and the game and e.g. translating everything to Xbox-style controls if the game needs it (and getting out of the way if it doesn’t).

    Outside of that, well, Valve added support for the controller to SDL, so anything using it will be fully supported. But then the game needs to actually be using a new enough version of SDL, otherwise it’ll just see a generic controller device, and that can be hit or miss (though there’s a workaround for that, sketched below).
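
    On Linux at least, SDL2’s dynamic API gives you an escape hatch: you can point a game at a newer system SDL than the one it bundles (the library path varies per distro, and this only works for games built against SDL2’s dynapi, which is most of them):

    ```sh
    # In Steam's per-game launch options: swap in the system SDL2,
    # which knows about newer controllers, then run the game as usual.
    SDL_DYNAMIC_API=/usr/lib/libSDL2-2.0.so.0 %command%
    ```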



  • I’ve got some numbers; it took longer than I’d have liked because of ISP issues. Each period is about a day, give or take.

    With the default TTLs, my unbound server saw 54,087 total requests: 17,022 got a cache hit and 37,065 a cache miss, so a 31.5% cache hit rate.

    With clamping it saw 56,258 requests: 30,761 were hits and 25,497 misses, a 54.7% cache hit rate.

    And the important thing, and the most “unscientific”: I didn’t encounter any issues with stale DNS results, in that everything still seemed to work and I didn’t get random error pages while browsing or such.

    I’m kinda surprised the total query counts were so close; I would have assumed a longer TTL would also cause clients to cache results for longer, making fewer requests (though e.g. Firefox actually caps TTLs at 600 seconds or so). My working theory is that for things like YouTube video, instead of using static hostnames and rotating out the IPs behind them, they’re doing the opposite: keeping the addresses fixed but changing the domain names, effectively cache-busting DNS.
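
    For reference, this kind of clamping is a single unbound.conf option; the 3600-second floor below is just an example value:

    ```
    server:
        # Cache every answer for at least an hour, even when the
        # authoritative server hands out a shorter TTL.
        cache-min-ttl: 3600
    ```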




  • > What they’re saying is that a web server can create a traditional jpeg file from a jpeg xl to send to a client as needed.

    Other way around: you can convert a “web safe” JPEG file into a JXL one (and back again), but you can’t turn any random JXL file into a JPEG file.

    But yeah, something like Lemmy could recompress uploaded JPEG images as JXL on the server, serving them as JXL to updated clients and converting back to JPEG as needed, saving server storage and bandwidth with no quality loss (roughly as sketched below).
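
    A rough sketch with the reference libjxl tools (file names are placeholders; lossless recompression is the default behaviour for JPEG input):

    ```sh
    # Losslessly repack an uploaded JPEG as JXL, typically ~20% smaller.
    cjxl upload.jpg stored.jxl

    # Later, reconstruct the bit-identical original JPEG for old clients.
    djxl stored.jxl legacy.jpg
    ```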





  • > So is this a matter of people turning mostly-static websites into React monstrosities or is it something else?

    Yep: replaced simple HTML with JSON and client-side templating, realised it was inherently slower so re-invented server-side generation (now called SSR, server-side rendering, because everything needs a fancy name), and then merged it all together on the client (rehydration).

    All this for content that is 99% static and doesn’t need that level of interactivity. Even the linked site is doing it for some reason, and it doesn’t even have comments or anything else that would explain it; it’s using it purely for navigation, where a plain link would suffice.