• 1 Post
  • 3.28K Comments
Joined 1 year ago
Cake day: February 10th, 2025

  • there is some crackling, with plenty of forum posts going back to 2005 explaining how to fix these things, which are no longer relevant because the sound stack uses something with a different name now.

    This is almost always because your PipeWire buffers are too small (the defaults err on the side of low latency), so when the CPU is busy the buffers run empty and you get crackling. Use pw-top to see all of your devices and streams; next to each device you should see a number in the QUANT column. Chances are that it is really low (or even 1).

    You can change your minimum buffer (PipeWire calculates it from a ‘quantum’) temporarily with:

    pw-metadata -n settings 0 clock.min-quantum 512
    

    To make it permanent, you can edit /etc/pipewire/pipewire.conf and set the value inside the context.properties section:

    context.properties = {
        default.clock.min-quantum = 512
    }
    
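    If you’d rather not edit the main config (package updates can overwrite it), recent PipeWire versions also read drop-in fragments from /etc/pipewire/pipewire.conf.d/. The filename here is just an example:

```
# /etc/pipewire/pipewire.conf.d/99-min-quantum.conf  (example filename)
context.properties = {
    default.clock.min-quantum = 512
}
```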

    Restart pipewire for the setting to take effect:

    systemctl --user restart pipewire
    

    (If your sound ever just dies for no reason, restarting pipewire is often all you need to do)

    Use the temporary setting to experiment and find a number that works. A lower number means a shorter buffer, so you get less audio latency in exchange for the risk of the buffer emptying. I don’t have much of a problem with 256, but sometimes Proton adds some extra CPU overhead and I’ll bump it up to 512.
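    For a rough sense of scale: the buffer latency is the quantum divided by the sample rate (assuming the common 48 kHz default), so you can estimate it with a quick one-liner:

```shell
# Estimate buffer latency in milliseconds for a given quantum,
# assuming a 48 kHz sample rate (PipeWire's usual default).
awk 'BEGIN {
    printf "quantum 256 -> %.1f ms\n", 256 / 48000 * 1000
    printf "quantum 512 -> %.1f ms\n", 512 / 48000 * 1000
}'
# quantum 256 -> 5.3 ms
# quantum 512 -> 10.7 ms
```

    So doubling the quantum from 256 to 512 costs roughly 5 ms of extra latency, which is usually unnoticeable in games.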

  • The post is about the actual product they’re selling, not whatever idealized idea of what a ‘proper’ LLM is.

    Yes, that is what the post is about.

    You didn’t click on reply to the post, you clicked reply under my comment.

    In my comment, I was talking about an LLM (I checked with myself) and the other person was also talking about LLMs and on up to the top of the comment chain where we started talking about LLMs in IT systems.

    From the context of the conversation, you should understand that we’re talking about LLMs, specifically about being in IT and having to deal with LLMs. The context should tell you that we’re talking about the actual language models, not the end-user applications like a chatbot.

    If they aren’t selling non-chatbot LLMs then that’s irrelevant.

    Ok, well this is easy then. Not every LLM is sold as a chatbot, so I’m not sure why you keep repeating this as if it were a point.

    If every LLM were sold as a chatbot, then this “ummm ackchully” would be irrelevant.

    Your first comment was ‘ummmm ackchully LLMs are only chatbots’ which is both wrong and ironic.