• 0 Posts
  • 181 Comments
Joined 3 years ago
Cake day: August 8th, 2023


  • I’d say it’s still accurate for quite a lot of us. Personally I avoid any “smart” device like the plague. I’m kinda done with tech outside of programming. I’d have a dumb phone if it wasn’t such a hassle in today’s society, none of my appliances is connected to the internet (apart from PC and phone), I like using old DSLRs and film cameras because I don’t want to look at another screen when out and about, I read physical books instead of digital, etc. I don’t own a car but if I had one it’d probably be some old piece of shit that just works, without all the smart shit if I can at all avoid it.

    I have printers that connect to the WiFi, but they’re turned off all the time unless I need them. There’s no way in hell my washing machine gets WiFi, nor any other appliance like it. And I’m also very distrustful of video doorbells or, even worse, those kinds of digital locks that unlock with a phone or something. I’m just tired of everything being connected, everything being a subscription, everything being a security nightmare, everything needing power or having to be charged.



  • For some issues, especially related to programming and Linux, I feel like I kinda have to at this point. Google seems to have become useless, and DDG was never great to begin with but is arguably better than Google now. I’ve had some very obscure issues that I spent quite some time searching for, only to drop them into ChatGPT and get a link to some random forum post that discusses them. The biggest one was a Linux kernel regression that had been posted that same day somewhere in the Arch Linux forums. Despite having a hunch about what it could be and searching/struggling for over an hour, I couldn’t find anything. ChatGPT then managed to link me the post (and a suggested fix: switching to the LTS kernel) in less than a minute.

    For general purpose search tho, hell no. If I want to know factual data that’s easy to find I’ll rely on the good old search engine. And even if I have to use an LLM, I don’t really trust it unless it gives me links to the information or I can verify that what it says is true.







  • Haha. The video is a bit pessimistic tho; I know people who work at companies with Haskell running in production (and they’re happy with it). Personally I have used monads, and I’ve wished for their functionality in other languages like Java, but I couldn’t reasonably explain what they are.

    Also, as someone who knows just about enough German to understand some of what they’re saying, it’s always quite hard to follow these videos. My brain doesn’t understand it when it hears “Das war ein Befehl!” (“That was an order!”) and the subtitles ramble on about something completely different.




  • Honestly, not at all. If CS paid like shit I’d still do it. Out of all the things it’s just what I enjoy most. Studying CS didn’t feel like something I had to do but rather something I wanted to do most of the time. Programming is like solving puzzles, but much cooler.


  • Yeah okay, but Cyberpunk 2077 and No Man’s Sky were new games, not an ancient game that should easily run on a Switch 2 but somehow doesn’t. And even then it took an insane turnaround before people loved them again. Cyberpunk has undergone a crazy transformation since launch, and it’s all for free (as should be expected when you release a dumpster fire).

    This is not an easy thing, and not something you can keep doing constantly. Bethesda has been on a roll of releasing broken, overpriced, boring shit for a while now. And constantly milking Skyrim. There are plenty of other games that I personally have played that aren’t there yet in this timeline either. Cities: Skylines 2 just got a new developer and is still not that great; I don’t think they’ll turn it around. Stalker 2 is on the right path (and I personally really liked it on launch and even more now), yet a lot of fans still seem pissed and the game is still properly janky. Pulling a Cyberpunk is the exception, not the rule.


  • Damn, that’s very lucky. Every device with Nvidia hardware that I installed Linux on has at some point gone to shit during updates or whatever. However, I must say that it has become way better in recent years. My Thinkpad was the worst because it was my first Linux device and it had an integrated Intel GPU and a dedicated Nvidia GPU, and getting it to work was a horror. In the end a friend of mine who was better at Linux just forced it to always use the Nvidia card, because then at least stuff worked reliably™.

    But even then it pretty much always died during Ubuntu release updates. I’ve nuked my whole system once because the screen went black (due to GPU drivers presumably) during one and after an hour or so I forcefully turned off the laptop because I couldn’t do anything anymore. After restarting into a tty my laptop was in some sort of limbo between 2 Ubuntu versions and I basically just had to reinstall.

    Ever since I made Linux (Arch btw) my main OS for gaming at the start of this year it has been quite stable though. I did switch to LTS kernels and after that everything has been pretty chill.


  • In terms of performance, yeah. Though not every old device keeps working; you’re still at the mercy of driver support for newer kernels. My old Thinkpad no longer functions properly because the Nvidia drivers are not compatible with newer kernels. I can either have an unsafe machine that runs fine or an up-to-date machine that can barely open a web browser.



  • Fun fact: this loop is kinda how one family of generative ML models works. It’s called a Generative Adversarial Network, or GAN.

    You have a so-called Generator neural network G that generates something (usually images) from random noise and a Discriminator neural network D that can take images (or whatever you’re generating) as input and outputs whether this is real or fake (not actually in a binary way, but as a continuous value). D is trained on images from G, which should be classified as fake, and real images from a dataset that should be classified as real. G is trained to generate images from random noise vectors that fool D into thinking they’re real. D is, like most neural networks, essentially just a mathematical function so you can just compute how to adjust the generated image to make it appear more real using derivatives.

    In the perfect case these 2 networks battle until they reach peak performance. In practice you usually need to do some extra shit to prevent the whole situation from crashing and burning. What often happens, for instance, is that D becomes so good that it doesn’t provide any useful feedback anymore. It sees the generated images as 100% fake, meaning there’s no longer an obvious way to alter the generated image to make it seem more real.
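    If it helps to see it in code, here’s a rough toy sketch of that training loop (assuming PyTorch; the network sizes and the stand-in “dataset” are just placeholders I made up, not any particular paper’s setup):

    ```python
    # Toy GAN sketch (assumes PyTorch is available); the "real" data here is
    # just a stand-in Gaussian so the example runs without any dataset.
    import torch
    import torch.nn as nn

    noise_dim, data_dim = 8, 2

    # Generator G: random noise vector -> fake sample
    G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    # Discriminator D: sample -> logit, higher means "looks more real"
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    def real_batch(n=64):
        # Placeholder "dataset": points from a shifted Gaussian.
        return torch.randn(n, data_dim) * 0.5 + 2.0

    for step in range(2000):
        # Train D: real samples should be scored as real, G's output as fake.
        real = real_batch()
        fake = G(torch.randn(real.size(0), noise_dim)).detach()  # detach: don't update G here
        d_loss = (loss_fn(D(real), torch.ones(real.size(0), 1))
                  + loss_fn(D(fake), torch.zeros(fake.size(0), 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train G: gradients flow *through* D back into G, nudging the fakes
        # in whatever direction makes D score them as more real.
        fake = G(torch.randn(64, noise_dim))
        g_loss = loss_fn(D(fake), torch.ones(64, 1))  # G wants D to say "real"
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    ```

    The key bit is the G step at the end: the loss goes back through D into G’s parameters, which is the “compute how to adjust the generated image using derivatives” part from above.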

    Sorry for the infodump :3