• 4 Posts
  • 131 Comments
Joined 3 years ago
Cake day: June 13th, 2023

  • I think the question of fair use is separate from the question of piracy, and probably separate from the question of intellectual property in general. Even if we were to protect fair use, that doesn’t make it legal to wholesale copy books. Individual piracy from people who can’t really afford it is one thing and largely harmless, even a net good. I know people who only started reading books from particular authors because they pirated one copy and bought others. That’s very different from a company downloading entire libraries of books without paying. Shifting the question from piracy to fair use is just another way of steering you toward the wrong question.

    I’d like to live in a world that doesn’t gatekeep property. But we live in a world where artists aren’t paid for their work directly, and in that world intellectual property is necessary.





  • Having tried simple bidets in warm, cold, and neutral-ish climates, I find that cold water bidets seem to stiffen the poo bits and make it hard to actually get them off your butt, especially since they stick to the hairs. You and I might be talking about different levels of cold, though.



  • You should give Claude Code a shot if you have a Claude subscription. I’d say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won’t suddenly be productive enough to employ on their own, but I as a professional can use them to accelerate my own workflow. That’s also where the real risk of them taking jobs lies: for example, instead of 10 support people you can have 2 who just supervise the responses of an AI.

    But of course, the Devil’s in the details. The only reason this is cost-effective is that VC money subsidizes and hides the real cost of running these models.



  • Compilation is CPU bound and, depending on the language, mostly single-core per compilation unit (i.e. in LLVM that’s roughly per file). Incremental compilations will probably only touch a file or two at a time, so the biggest gain comes from higher single-core clock speed, not higher core count. So you want to focus on CPUs with higher clock speeds.

    Also, high-speed disks (NVMe or at least a regular SSD) give you performance gains for larger codebases.







  • I think the main barriers are context length (useful context, that is: GPT-4o advertises a 128k context, but it’s mostly sensitive to the beginning and end of the context and blurry in the middle, which is consistent with other LLMs) and the data just not existing. How many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of “how to split a string in bash” or “how to set up validation in Spring Boot”. We might “get there”, but it’ll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans.