• FlexibleToast@lemmy.world
    11 days ago

A 120B-parameter model is small compared to the models running in datacenters. Still, this does seem like the current “Moore’s Law” for AI: finding more and more efficient ways to run larger-parameter models.

  • Kissaki
    11 days ago

    The article isn’t very concrete on what the substance of this Anyway System is.

    The FAQ answered my question though, and it seems mostly about dynamic management of PCs as execution nodes.

I assume that means I could run the model on just one of those PCs as well, despite the article claiming you need as few as four PCs? Or does this system somehow distribute a single prompt’s inference across multiple execution pipelines?
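    For what it’s worth, the second option usually means pipeline parallelism: the model’s layers are split into contiguous ranges, one range per node, and a single prompt’s activations flow node to node. A minimal sketch of the layer-assignment step (purely illustrative; the names and the even-split policy are my assumptions, not anything from the article’s system):

    ```python
    def split_layers(num_layers, num_nodes):
        """Assign contiguous layer ranges to nodes as evenly as possible.

        Returns a list of (start, end) half-open ranges, one per node.
        """
        base, extra = divmod(num_layers, num_nodes)
        assignments = []
        start = 0
        for node in range(num_nodes):
            # Early nodes absorb the remainder when layers don't divide evenly.
            count = base + (1 if node < extra else 0)
            assignments.append((start, start + count))
            start += count
        return assignments

    # e.g. a 32-layer model across 4 PCs: each PC holds 8 consecutive layers,
    # and one prompt's forward pass hops across all four machines.
    print(split_layers(32, 4))  # → [(0, 8), (8, 16), (16, 24), (24, 32)]
    ```

    That would explain a hard minimum node count: each PC only needs enough memory for its own slice of the weights, not the whole model.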

  • Mikina
    11 days ago

    Do we really need datacenters to host (load balanced) web servers and services?

    This could challenge the business model of Big Tech!